PROCEEDINGS Volume 1 - ineag

<strong>PROCEEDINGS</strong> of the 8th International Conference on Applied Financial<br />

Economics - <strong>Volume</strong> 1<br />

Samos Island, GREECE, 30 June - 02 July 2011<br />

EDITED BY<br />

Research and Training Institute of the East Aegean (INEAG), Greece<br />

EDITOR<br />

Chrysovaladis Prachalias<br />

PUBLISHED BY<br />

National and Kapodistrian University of Athens, Greece<br />

ISBN: 978-960-466-085-8<br />

ISBN [SET]: 978-960-466-087-2<br />

ISSN: 1790-3912


ACKNOWLEDGMENTS<br />

The editors would like to thank all contributing authors for the effort they put into preparing their submissions and presentations. We would also like to thank the personnel of the National and Kapodistrian University of Athens for their kind offer to publish the proceedings of the 8th AFE Samos 2011. The editors thank the scientific committee and the reviewers, who carefully read and reviewed all contributions. The conference committee thanks the Hellenic Telecommunications Organization for its support and kind cooperation.<br />



MESSAGE FROM THE COORDINATOR OF THE SCIENTIFIC COMMITTEE<br />

Dear Participants,<br />

On behalf of the Scientific Committee I would like to warmly welcome you all to the 8th<br />

International Applied Financial Economics (AFE) Conference. Despite their short history, the<br />

AFE International Conferences, co-organized this year by INEAG and the National and<br />

Kapodistrian University of Athens, have already gained a worldwide reputation and have been<br />

established as a forum in which academics, researchers and professional experts in the field of<br />

finance from all over the world come together, interact, exchange ideas, and present their<br />

research. A direct reflection of this success story is the number of submitted papers, which has<br />

been increasing substantially year by year, as well as their high quality. There is little doubt that<br />

scientifically, as well as in terms of participation, this year’s conference is indeed beyond<br />

expectations, as the Scientific Committee has received more than one hundred and fifty very<br />

interesting research papers, including one co-authored by a Nobel Prize laureate, while<br />

presenters come from over twenty countries.<br />

Over the last year there have been signs that the world economy is recovering from the 2009 shock, but at a much faster pace in the Far East than in the West. However, the international financial system has not yet stabilized and, given that world debt has risen to about 80% of world GDP, further trouble in the financial system may arise in the medium to long term. On the other hand, Greece is in the midst of an acute crisis, with increased debt combined with negative growth. It is noteworthy that several of the submitted manuscripts address these issues, clearly indicating the fast adaptation of this conference not only to current international trends in financial research, but also to major real economic events and the developments in the international financial markets. In that sense this conference offers a unique opportunity for first-hand knowledge of the most up-to-date financial research, and I am certain that we will all take advantage of this opportunity.<br />

Wishing you all a very pleasant and fruitful stay on the beautiful island of Samos.<br />

Alexandros E. Milionis<br />

Assistant Professor<br />

University of the Aegean<br />

Department of Statistics and Actuarial - Financial Mathematics<br />



Contents Page<br />

PREFACE..............................................................................................................................................3<br />

CONTENTS ...........................................................................................................................................4<br />

SCIENTIFIC COMMITTEE...................................................................................................................10<br />

STEERING COMMITTEE.....................................................................................................................10<br />

CONFERENCE COORDINATOR AND SECRETARIAT............................................................................11<br />

KEYNOTE SPEAKERS .........................................................................................................................11<br />

KEYNOTE LECTURES ................................................................................................................ 12<br />

PORTFOLIO SELECTION: A REVIEW..................................................................................................... 14<br />

Jerome Detemple<br />

ENHANCING FILTERED HISTORICAL SIMULATIONS ............................................................................. 29<br />

Giovanni Barone-Adesi<br />

INTERNATIONAL PROPAGATION OF THE CREDIT CRISIS....................................................................... 32<br />

Ian Cooper<br />

SHORT WAVELENGTH FINANCE: WHAT, WHO AND WHY.................................................................... 69<br />

Nick Kondakis<br />

ASSET ALLOCATION-PORTFOLIO MANAGEMENT.......................................................... 70<br />

ON THE PERFORMANCE OF A HYBRID GENETIC ALGORITHM: APPLICATION ON THE PORTFOLIO<br />

MANAGEMENT PROBLEM ................................................................................................................... 72<br />

Vassilios Vassiliadis, Vassiliki Bafa, George Dounias<br />

ASSET PRICING............................................................................................................................ 80<br />

ASSET PRICING IN THE CYPRUS STOCK EXCHANGE: ARE STOCKS FAIRLY PRICED? ............................ 82<br />

Haritini Tsangari, Maria Elfani<br />

THE VALUATION OF EQUITIES WHEN SHAREHOLDERS ENJOY LIMITED LIABILITY.............................. 90<br />

Jo Wells<br />

PRICE PRESSURE RISK FACTOR IN CONVERTIBLE BONDS.................................................................. 100<br />

Nikolay Ryabkov, Galyna Petrenko<br />

THE PRICING OF EQUITY-LINKED CONTINGENT CLAIMS UNDER A LOGNORMAL SHORT RATE DYNAMICS<br />

......................................................................................................................................................... 110<br />

Rosa Cocozza, Antonio De Simone<br />



LIQUIDITY AND EXPECTED RETURNS: NEW EVIDENCE FROM DAILY DATA 1926-2008...................... 120<br />

M. Reza Baradaran, Maurice Peat<br />

BANKING...................................................................................................................................... 130<br />

ELECTRONIC BANKING IN JORDAN: A FRAMEWORK OF ADOPTION ................................................... 132<br />

Muneer Abbad, Juma’h Abbad, Faten Jaber<br />

PROCYCLICALITY OF BANKS’ CAPITAL BUFFER IN ASIAN COUNTRIES .............................................. 142<br />

Elis Deriantino<br />

DETERMINING EFFECTIVE INDICATORS FOR CUSTOMER PERFORMANCE IN REPAYING BANK LOAN (CASE<br />

STUDY: IRANIAN BANKS) ................................................................................................................. 147<br />

Fariba SeyedJafar Rangraz, Naser Shams<br />

TWO-STAGE DEA APPROACH TO EVALUATE THE EFFICIENCY OF BANK BRANCHES......................... 155<br />

Akram Bodaghi & Heidar Mostakhdemin Hosseini<br />

MODELLING FUNCTIONAL INDICATORS FOR CUSTOMER PERFORMANCE IN REPAYING BANK LOAN (CASE<br />

STUDY: IRANIAN BANKS) ................................................................................................................. 160<br />

Fariba Seyed Jafar Rangraz, Jafar Pashami<br />

MEASURING AGREEMENT WITH THE WEIGHTED KAPPA COEFFICIENT OF CREDIT RATING DECISIONS: A<br />

CASE OF BANKING SECTOR .............................................................................................................. 165<br />

Funda H. Sezgin, Ozlen Erkal<br />

THE GLOBAL FINANCIAL CRISIS AND THE BANKING SECTOR IN SERBIA............................................ 173<br />

Emilija Vuksanović, Violeta Todorović<br />

COLLECTION MANAGEMENT AS CRUCIAL PART OF CREDIT RISK MANAGEMENT DURING THE CRISIS. 182<br />

Lidija Barjaktarovic, Snezana Popovcic-Avric, Marina Djenic<br />

HOW THE CROSS BORDER MERGERS AND ACQUISITIONS OF THE GREEK BANKS IN THE BALKAN AREA<br />

IMPROVE THE COURSE OF PROFITABILITY, EFFICIENCY AND LIQUIDITY INDEXES OF THEM?............. 192<br />

Kyriazopoulos George, Petropoulos Dimitrios<br />

PERFORMANCE OF ISLAMIC BANKS ACROSS THE WORLD: AN EMPIRICAL ANALYSIS OVER THE PERIOD<br />

2001-2008........................................................................................................................................ 202<br />

Sandrine Kablan, Ouidad Yousfi<br />

THE DETERMINANTS OF THE EAD .................................................................................................... 208<br />

Yenni Redjah, Jean Roy, Inmaculada Buendía Martínez<br />

A MACRO-BASED MODEL OF PD AND LGD IN STRESS TESTING LOAN LOSSES .................................. 215<br />

Esa Jokivuolle, Matti Virén<br />

XTREME CREDIT RISK MODELS: IMPLICATIONS FOR BANK CAPITAL BUFFERS.................................. 222<br />

David E. Allen, Akhmad R. Kramadibrata, Robert J. Powell, Abhay K. Singh<br />



COMMODITIES MARKETS...................................................................................................... 230<br />

VOLATILITY DYNAMICS IN DUBAI GOLD FUTURES MARKET ............................................................ 232<br />

Ramzi Nekhili, Michael Thorpe<br />

STOCK RETURNS AND OIL PRICE BASED TRADING............................................................................ 241<br />

Michael Soucek<br />

OPTIMAL LEVERAGE AND STOP LOSS POLICIES FOR FUTURES INVESTMENTS.................................... 248<br />

Rainer A. Schüssler<br />

THE IMPACT OF INTERNATIONAL MARKET EFFECTS AND PURE POLITICAL RISK ON THE UK, EMU AND<br />

USA OIL AND GAS STOCK MARKET SECTORS .................................................................................. 257<br />

John Simpson<br />

THE PROGRESS OF NATURAL GAS MARKET LIBERALISATION IN THE UK AND USA: DE-COUPLING OF OIL<br />

AND GAS PRICES .............................................................................................................................. 264<br />

John Simpson<br />

QUANTITATIVE EASING ENGINEERED BY THE FED, AND PRICES OF INTERNATIONALLY TRADED AND<br />

DOLLAR DENOMINATED COMMODITIES AND PRECIOUS METALS...................................................... 274<br />

Gueorgui I. Kolev<br />

AGRI-BUBBLES ABSORB CHEAP MONEY: THEORIES AND PRACTICE IN CONTEMPORARY WORLD ..... 281<br />

Evdokimov Alexandre Ivanovich, Soboleva Olga Valerjevna<br />

THE INFORMATION CONTENT OF IMPLIED VOLATILITY IN THE CRUDE OIL FUTURES MARKET.......... 290<br />

Asyl Bakanova<br />

LITHIUM POWER: IS THE FUTURE GREEN? ........................................................................................ 300<br />

Satyarth Pandey, Veena Choudhary<br />

CORPORATE FINANCE ............................................................................................................ 308<br />

PREDICTING BUSINESS FAILURE USING DATA-MINING METHODS .................................................... 310<br />

Sami BEN JABEUR, Youssef FAHMI<br />

SIMULTANEOUS DETERMINATION OF CORPORATE DECISIONS: AN EMPIRICAL INVESTIGATION USING UK<br />

PANEL DATA .................................................................................................................................... 318<br />

Qingwei Meng<br />

LEVERAGE ADJUSTMENT AND COST OF CAPITAL.............................................................................. 321<br />

Sven Husmann, Michael Soucek, Antonina Waszczuk<br />

DOES LEVERAGE AFFECT LABOUR PRODUCTIVITY? A COMPARATIVE STUDY OF LOCAL AND<br />

MULTINATIONAL COMPANIES OF THE BALTIC COUNTRIES................................................................ 329<br />

Mari Avarmaa, Aaro Hazak, Kadri Männasoo<br />

MULTISTAGE INVESTMENT OPTIONS, TIME-TO-BUILD AND FINANCING CONSTRAINTS..................... 344<br />



Elettra Agliardi, Nicos Koussis<br />

RE-EXAMINING CAPITAL STRUCTURE TESTS: AN EMPIRICAL ANALYSIS IN THE AIRLINE INDUSTRY.. 355<br />

Kruse Sebastian, Kalogeras Nikos, Semeijn Janjaap<br />

UNDERSTANDING THE DIFFERENCE IN PREMIUM PAID IN ACQUISITION OF PUBLIC COMPANIES.... 367<br />

Nahum Biger, Eli Ziskind<br />

THE SPECIALITIES OF THE SMALL- AND MEDIUM-SIZE ENTERPRISES’ FINANCING AND THE<br />

DETERMINANTS OF THEIR GROWTH IN HUNGARY.......................................................................... 377<br />

Zsuzsanna Széles, Zoltán Szabó<br />

CORPORATE GOVERNANCE .................................................................................................. 388<br />

CORPORATE GOVERNANCE PRACTICES AND THEIR IMPACT ON FIRM’S CAPITAL STRUCTURE AND<br />

PERFORMANCE; CASE OF PAKISTANI TEXTILE SECTOR ..................................................................... 390<br />

Hayat M. Awan, Khuram Shahzad Bukahri, Rameez Mahmood Ansari<br />

CORPORATE GOVERNANCE AND COMPLIANCE WITH IFRSS - MENA EVIDENCE................................ 401<br />

Marwa Hassaan, Omneya Abdelsalam<br />

BUSINESS PERFORMANCE EVALUATION MODELS AND DECISION SUPPORT SYSTEM FOR THE ELECTRONIC<br />

INDUSTRY ........................................................................................................................................ 411<br />

Wu Wen<br />

APPLYING THE CONCEPT OF FAIR VALUE AT BALANCE SHEET ITEMS. THE CASE OF ROMANIA ........ 418<br />

Marinela-Daniela Manea<br />

THE IMPACT OF THE INTELLECTUAL CAPITAL ON THE COST OF CAPITAL: BRICS CASE .................... 429<br />

Elvina R. Bayburina, Alexandra Brainis<br />

BRAND VALUE DRIVERS OF THE LARGEST BRICS COMPANIES......................................................... 440<br />

Elvina R. Bayburina, Nataliya Chernova<br />

GOVERNANCE, BOARD INDEPENDENCE, SUB COMMITTEES AND FIRM PERFORMANCE: EVIDENCE FROM<br />

AUSTRALIA ...................................................................................................................................... 451<br />

Wanachan Singh, Robert T Evans, John P Evans<br />

COST-BENEFIT ANALYSIS OF THE FINANCIAL STATEMENTS CONVERSION: A CASE STUDY FROM THE<br />

CZECH REPUBLIC ............................................................................................................................. 464<br />

David Procházka<br />

NEGOTIATING SUCCESSION STRATEGY IN FAMILY RUN SME’S ......................................................... 473<br />

Christopher Milner<br />

CREDIT RATING MODEL FOR SMALL AND MEDIUM ENTERPRISES WITH ORDERED LOGIT REGRESSION: A<br />

CASE OF TURKEY ............................................................................................................................. 483<br />

Özlen ERKAL, Tugcen HATİPOĞLU<br />



PRODUCTS WITH LONG AGING PERIOD IN THE AGRO-FOOD SYSTEM: ANALYSIS OF MEAT SECTOR .. 490<br />

Mattia Iotti, Giuseppe Bonazzi, Vlassios Salatas<br />

EMPLOYEE STOCK OPTIONS INCENTIVE EFFECTS: A CPT-BASED MODEL.......................................... 499<br />

Hamza BAHAJI<br />

EMERGING MARKETS.............................................................................................................. 510<br />

COMOVEMENTS IN THE VOLATILITY OF EMERGING EUROPEAN STOCK MARKETS............................. 512<br />

Radu Lupu, Iulia Lupu<br />

OWNERSHIP STRUCTURE, CASH CONSTRAINTS AND INVESTMENT BEHAVIOUR IN RUSSIAN FIRMS.... 522<br />

Tullio Buccellato, Gian Fazio, Yulia Rodionova<br />

MORTGAGE HOUSING CREDITING IN RUSSIA: CRISIS OVERCOMING AND FURTHER DEVELOPMENT<br />

PERSPECTIVES .................................................................................................................................. 523<br />

Liudmila Guzikova<br />

FOREIGN INVESTORS’ INFLUENCE TOWARDS SMALL STOCK EXCHANGES BOOM AND BUST:<br />

MACEDONIAN STOCK EXCHANGE CASE............................................................................................ 532<br />

Dimche Lazarevski<br />

GENDER BIAS IN HIRING, ACCESS TO FINANCE AND FIRM PERFORMANCE: EVIDENCE FROM<br />

INTERNATIONAL DATA..................................................................................................................... 540<br />

Nigar Hashimzade, Yulia Rodionova<br />

INTELLECTUAL CAPITAL DIMENSIONS AND QUOTED COMPANIES IN TEHRAN EXCHANGE, BASED ON<br />

BOZBURA MODEL............................................................................................................................. 541<br />

Rasoul Abdi, Nader Rezaei, Yagoub Amirdalire Bonab<br />

EMPIRICAL FINANCE............................................................................................................... 552<br />

ANALYZING THE LINK BETWEEN U.S. CREDIT DEFAULT SWAP SPREADS AND MARKET RISK: A 3-D<br />

COPULA FRAMEWORK ...................................................................................................................... 554<br />

Hayette Gatfaoui<br />

MODELLING OF LINKAGES BETWEEN STOCK MARKETS INCLUDING THE EXCHANGE RATE DYNAMICS 562<br />

Malgorzata Doman, Ryszard Doman<br />

PROPAGATION OF SHOCKS IN GLOBAL STOCK MARKET: IMPULSE RESPONSE ANALYSIS IN A COPULA<br />

FRAMEWORK.................................................................................................................................... 572<br />

Ryszard Doman, Malgorzata Doman<br />

STOCK SPLITS AND HERDING............................................................................................................ 582<br />

Maria Chiara Iannino<br />



TIME-VARYING BETA RISK FOR TRADING STOCKS OF TEHRAN STOCK EXCHANGE IN IRAN: (A<br />

COMPARISON OF ALTERNATIVE MODELLING TECHNIQUES).............................................................. 592<br />

Majid Mirzaee Ghazani<br />



SCIENTIFIC COMMITTEE<br />

Dr. Ian Cooper Professor, London Business School, UK<br />

Dr. Jerome Detemple Professor, Boston University, School of Management, USA<br />

Dr. Giovanni Barone-Adesi Professor, University of Lugano and Swiss Finance Institute, Switzerland<br />

Dr. Tompaidis Stathis Associate Professor, University of Texas at Austin, McCombs School of Business, Department of Finance, USA<br />

Dr. Alireza Tourani-Rad Professor, Auckland University of Technology, Department of Finance, Auckland, New Zealand<br />

Dr. Floros Christos Senior Lecturer, University of Portsmouth, Business School, Department of Economics, UK<br />

Dr. Gatfaoui Hayette Tenured Associate Professor, University Paris I, Economics & Finance Department, France<br />

Dr. Milionis Alexandros Coordinator of Scientific Committee, Assistant Professor, University of the Aegean and Bank of Greece<br />

Dr. Kondakis Nick President and Managing Director of Kepler Asset Management LLC, New York, USA<br />

Dr. Chaker Aloui Professor, Faculty of Management and Economic Sciences of Tunis, Director of the International Finance Group-Tunisia, Tunisia<br />

STEERING COMMITTEE<br />

Prachalias Chrysovaladis Conference Director, PhD candidate, National & Kapodistrian University of Athens, INEAG<br />

Dr. Papadimitriou Athanasios Lecturer, Faculty of Business Administration, University of Piraeus<br />

Giasla Evangelia MSc, National & Kapodistrian University of Athens, Greece<br />

Dr. Papanagiotou Evangelia University of the Aegean, Greece<br />

Dr. Christopoulos Apostolos Lecturer, National & Kapodistrian University of Athens, Faculty of Economics<br />

Mamzeridou Efi Mathematician, MSc from University of Sheffield<br />



CONFERENCE COORDINATOR AND SECRETARIAT<br />

Prachalias Chrysovaladis Conference Administrative Director, Research and Training Institute of East Aegean<br />

KEYNOTE SPEAKERS<br />

Dr. Ian Cooper Professor, London Business School, UK<br />

Dr. Jerome Detemple Professor, Boston University, School of Management, USA<br />

Dr. Giovanni Barone-Adesi Professor, University of Lugano and Swiss Finance Institute, Switzerland<br />

Dr. Kondakis Nick President and Managing Director of Kepler Asset Management LLC, New York, USA<br />



SPECIAL KEYNOTE SPEECHES<br />



PORTFOLIO SELECTION: A REVIEW<br />

Jérôme Detemple<br />

Boston University School of Management and CIRANO, Boston, USA<br />

Email: rindisb@bu.edu<br />

Keynote lecture: Conference on Applied Financial Economics<br />

April 25, 2011<br />

Abstract. This paper reviews portfolio selection models and provides perspective on some open issues. It starts with a review of the<br />

classic Markowitz mean-variance framework. It then presents the intertemporal portfolio choice approach developed by Merton and<br />

the fundamental notion of dynamic hedging. Martingale methods and resulting portfolio formulas are also reviewed. Their usefulness<br />

for economic insights and numerical implementations is illustrated. Areas of future research are outlined.<br />

Key words: Portfolio choice, Mean-variance model, Diffusion models, Complete markets, Monte Carlo simulation, Malliavin<br />

derivative, Dynamic hedging, Bond numeraire.<br />

1 Introduction<br />

A question of long-standing interest in finance pertains to the optimal allocation of funds among various financial<br />

assets available, in order to sustain lifetime consumption and bequest. The answer to this question is important for<br />

practical purposes, both from an institutional and an individual point of view. Mutual funds, pension funds, hedge<br />

funds and other institutions managing large portfolios are routinely confronted with this type of decision. Individuals<br />

planning for retirement are also concerned about the implications of their choices. Quantitative portfolio models<br />

help to address various issues of relevance to the parties involved.<br />

Mean-variance analysis, introduced by Markowitz (1952), has long been a popular approach to determine the<br />

structure and composition of an optimal portfolio. This type of analysis, unfortunately, suffers from several<br />

shortcomings. It suggests, in particular, optimal portfolios that are independent of an investor’s horizon. A rigorous<br />

dynamic analysis of the consumption-portfolio choice problem, as originally carried out by Merton (1969, 1971),<br />

reveals some of the missing ingredients. It shows that optimal portfolios should include, in addition to mean-variance terms, dynamic hedging components designed to insure against fluctuations in the opportunity set.<br />

Merton’s analysis highlights the restrictive nature of mean-variance portfolios. Policies of this type are only optimal<br />

under extreme circumstances, namely for investors with logarithmic utility (who display myopic behavior) or when<br />

opportunity sets are deterministic (means and variances of asset returns do not vary stochastically). It also shows<br />

that dynamic hedging terms depend on an investor’s horizon and modulate the portfolio composition as the<br />

individual ages.<br />

Merton’s portfolio formula is based on a partial differential equation (PDE) characterization of the value<br />

function associated with the consumption-portfolio choice problem. This type of characterization, while leading to<br />

interesting economic insights, presents challenges for implementation. PDEs are indeed notoriously difficult (if not<br />

impossible) to solve numerically in the case of high-dimensional problems. This precludes implementations for large<br />

scale investment models with many assets and state variables, and for investors with wealth-dependent relative risk<br />

aversion.1<br />

An alternative characterization of optimal portfolios is obtained by using probabilistic concepts and methods,<br />

introduced with the advent of the martingale approach. Major contributions, by Pliska (1986), Karatzas, Lehoczky<br />

and Shreve (1987) and Cox and Huang (1989), lead to the identification of explicit solutions for optimal<br />

1 Brennan, Schwartz and Lagnado (1997) provide numerical results for a class of low-dimensional problems when utilities are constant relative risk averse.<br />



consumption and bequest. Optimal portfolio formulas are derived by Ocone and Karatzas (1991) for Ito processes<br />

and Detemple, Garcia and Rindisbacher (2003) for diffusions. These formulas take the form of conditional<br />

expectations of random variables that are explicitly identified and involve auxiliary factors which, in diffusion<br />

models, solve stochastic differential equations (SDEs). For implementation, Monte Carlo simulation is naturally<br />

suggested by the structure of these expressions.<br />
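As a rough illustration of why Monte Carlo simulation fits naturally here, the sketch below simulates a diffusion with an Euler scheme and estimates an expectation by averaging over paths. The Ornstein-Uhlenbeck short-rate dynamics and all parameter values are assumptions chosen purely for illustration, not the models used in the papers cited above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state variable: Ornstein-Uhlenbeck short rate (illustrative dynamics):
#   dr = kappa * (theta - r) dt + sigma dW
kappa, theta, sigma = 1.0, 0.05, 0.02
r0, T, n_steps, n_paths = 0.03, 1.0, 100, 20000
dt = T / n_steps

r = np.full(n_paths, r0)
integral = np.zeros(n_paths)  # accumulates the integral of r_t over [0, T], path by path
for _ in range(n_steps):
    integral += r * dt  # left-point (Euler) approximation of the time integral
    r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Monte Carlo estimate of an expectation, here a zero-coupon bond price
# E[exp(-integral of r_t dt)]:
bond_price = np.exp(-integral).mean()
```

The same pattern, simulating the SDEs for the auxiliary factors and averaging a path functional, is what the conditional-expectation portfolio formulas suggest.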

The martingale approach helps to pinpoint the motivation behind dynamic hedging, namely the stochastic<br />

fluctuations in the interest rate and the market price of risk. Random changes in these variables give rise to an<br />

interest rate hedge and a market price of risk hedge. Alternatively, following Sorensen (1999), Lioui and Poncet<br />

(2001, 2003), Munk and Sorensen (2004) and Detemple and Rindisbacher (2010), it is also possible to rewrite the<br />

hedges using long term bonds as units of account. From this perspective, the hedging behavior is seen to be<br />

motivated by instantaneous fluctuations in a menu of long term bond prices and in the market price of risk expressed<br />

in bond numeraire. The resulting formula highlights the intrinsic interest in long term bonds for asset allocation.<br />

This paper reviews the different characterizations of optimal consumption-portfolio policies derived in the<br />

literature. Section 2 describes the classic mean-variance Markowitz model. Section 3 reviews the dynamic model<br />

developed by Merton. Characterizations of optimal policies and numerical implementations based on martingale<br />

methods are presented in Section 4. Alternative formulas obtained by using long term bonds as numeraires are in<br />

Section 5. Open issues and perspectives for future research are outlined in the concluding section.<br />

2 The mean-variance model<br />

The canonical static model goes back to Markowitz (1952). The model considers an investor who maximizes the<br />

expected utility of terminal wealth<br />

subject to the wealth constraint<br />

Preferences in (1) can be written in terms of a utility function . The coefficient<br />

is an investor specific preference parameter which determines risk tolerance. Initial wealth is . Terminal wealth in<br />

(2) is the result of investing the vector of amounts in the risky assets available and initial wealth net of total risky<br />

investments in the riskless asset. The vector is the vector of fractions of initial wealth in the risky<br />

assets. The vector is the vector of random rates of returns on the risky assets and is the sure rate of return on the<br />

riskless asset. The vector of means of the risky rates of returns is . The variance-covariance matrix is .<br />

Unconstrained maximization with respect to gives the first order condition<br />

Simple derivations lead to the solution,<br />

Theorem 1. (Markowitz (1952)). The optimal portfolio solving the quadratic optimization problem (1)-(2) is<br />

The optimal amount invested in the riskless asset is . The fraction of initial<br />

wealth in the riskless asset is .<br />

The optimal portfolio is linear in expected excess returns and inversely related to the variance-covariance matrix. It<br />

reflects a trade-off between the means and variances/covariances of the excess returns on the risky assets.<br />
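As a numerical illustration of Theorem 1, the sketch below assumes the standard Markowitz form of the solution, risky weights proportional to the inverse covariance matrix times expected excess returns, which is consistent with the description above; the means, covariance matrix, and preference parameter are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical inputs (illustrative only, not taken from the paper)
mu = np.array([0.10, 0.14])      # expected rates of return on the two risky assets
r = 0.03                          # sure rate of return on the riskless asset
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])  # variance-covariance matrix of risky returns
a = 2.0                           # investor-specific preference parameter

# Optimal fractions of initial wealth in the risky assets:
# linear in expected excess returns, inversely related to Sigma.
w = np.linalg.solve(Sigma, mu - r) / a
w_riskless = 1.0 - w.sum()        # remainder goes into the riskless asset
```

A negative `w_riskless` simply means the investor borrows at the riskless rate to lever the risky position.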



Alternatively, one can also decompose the investor's choice problem into two parts: the determination of the set of efficient portfolios and the selection of the best efficient portfolio.<br />

The first of these problems, , is the combination of two sub-problems. The first determines the efficient<br />

set of risky assets. The second identifies the efficient set of risky and riskless assets. Sub-problem is<br />

and has the solution<br />

for . The set of efficient risky portfolios is . The efficient frontier is the<br />

function where .<br />

Sub-problem consists in<br />

and has a solution which satisfies<br />

At the maximum, the efficient frontier of risky assets is tangent to the line generated by combining the best portfolio<br />

of risky assets with the riskless asset. The (overall) efficient frontier is the line<br />

where<br />

is the Sharpe ratio of the tangency portfolio of risky assets and is its variance. Any variance level can be<br />

achieved by combining the tangency portfolio with the riskless asset.<br />
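The tangency portfolio and the slope of the efficient line can be computed directly. The sketch below normalizes the unconstrained risky-asset solution so the weights sum to one and then evaluates the Sharpe ratio; the inputs are the same hypothetical values as before, assumed for illustration only:

```python
import numpy as np

# Hypothetical two-asset example (illustrative values)
mu = np.array([0.10, 0.14])
r = 0.03
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

z = np.linalg.solve(Sigma, mu - r)   # unnormalized tangency direction
w_tan = z / z.sum()                  # tangency portfolio: risky weights sum to one

mean_tan = w_tan @ mu                # expected return of the tangency portfolio
var_tan = w_tan @ Sigma @ w_tan      # its variance
sharpe = (mean_tan - r) / np.sqrt(var_tan)

# Every point on the (overall) efficient frontier satisfies: mean = r + sharpe * std
```

Combining `w_tan` with the riskless asset traces out the efficient line at any desired variance level, exactly as the text states.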

The second problem, , is to maximize preferences subject to selecting portfolios in the efficient set<br />

where is initial wealth. This leads to the condition<br />



which corresponds to the original solution.<br />

Figure 1 illustrates the principles at work. The tangent portfolio represents the optimal choice among risky assets.<br />

This risky portfolio is then combined with the riskless asset to form the efficient line. The optimal allocation<br />

between the risky portfolio and the riskless asset is determined by the tangency point with the indifference curve.<br />

[Figure omitted: plot of mean (y-axis, 0.05 to 0.5) versus standard deviation (x-axis, 0 to 0.5) showing the efficient frontier, the tangent portfolio and the optimal portfolio.]<br />

Figure 1: Efficient line, optimal indifference curve and optimal portfolio for mean-variance preferences. There are two risky assets with expected<br />

rates of return , returns variances and returns covariance . The riskless rate of return is .<br />

Although the mean-variance preferences (1) are intuitive and easy to work with, they also have undesirable<br />

properties. One drawback is that additional wealth becomes undesirable beyond a certain level (marginal utility of<br />

wealth becomes negative). Another issue is their inconsistency with second order stochastic dominance (Rothschild<br />

and Stiglitz (1970)). A final element is the non-linear structure with respect to probabilities. This property limits<br />

their applicability to dynamic choice settings (time-consistency problem).<br />

3 Dynamic portfolio choice: dynamic programming<br />

Another important limitation of the mean-variance model is the implicit assumption of a constant opportunity set<br />

(the parameters are assumed to be constant). Yet, a wide body of empirical evidence points to stochastic<br />

variations in means and variances of asset returns. The dynamic model developed by Merton (1969, 1971) is<br />

designed to incorporate this important feature of the data.<br />

The model is cast in continuous time. The financial market comprises risky assets and a riskless asset. The<br />

riskless asset is a money market account with instantaneous interest rate , where is a vector of state<br />

variables. The vector of instantaneous rates of returns on the risky assets and the vector of state variables satisfy<br />

where is the volatility matrix of asset returns, which is assumed to be invertible. The coefficient<br />

is the vector of market prices of risk (the risk premia per unit risk) and is the<br />



increment in a -dimensional Brownian motion. Invertibility of ensures that the market price of risk is<br />

uniquely defined. It essentially implies that the market is complete, in the sense that all risks are hedgeable. The drift<br />

and volatility of the process followed by state variables are assumed to satisfy the usual<br />

conditions for existence of a unique strong solution (see Karatzas and Shreve (1991), p. 338).<br />

The investor maximizes<br />

subject to the constraints<br />

for all . The preferences, in (9), are von Neumann-Morgenstern and indicate that the individual derives<br />

utility from intermediate consumption (at time ) and from terminal wealth (bequest) . Equation (10) shows<br />

the evolution of wealth when a consumption-portfolio policy is pursued. The inequalities in (11) capture the<br />

physical requirement that consumption be non-negative and the financial requirement that terminal wealth be non-negative.<br />

The utility and bequest functions are assumed to be strictly increasing, strictly concave and<br />

twice continuously differentiable, and to satisfy the Inada conditions at zero and infinity. Marginal utility and<br />

bequest functions have the inverses . The time argument in these functions stands for a<br />

subjective discount factor, which is assumed to be deterministic.<br />

Merton’s characterization of the optimal policies is given next.<br />

Theorem 2. (Merton (1971)). Let be the value function for the consumption-portfolio problem and let<br />

be its first and second partial derivatives with respect to wealth and the<br />

vector of state variables. Optimal consumption and bequest are<br />

(12)<br />

The optimal portfolio has two components, a mean-variance term and a dynamic hedging term . The<br />

optimal amount invested is where<br />

The value function solves the partial differential equation<br />

where<br />

and subject to the boundary conditions and .<br />

In this dynamic environment, the marginal cost of funds is the derivative of the value function with respect to wealth<br />

. At the optimum, the marginal utility of consumption must equal the marginal cost,<br />

. Optimal consumption is then the inverse marginal utility evaluated at the marginal cost (the<br />

first equation in (12)). The same rationale leads to the optimal bequest function (the second equation in (12)).<br />

In contrast with the static analysis in Section 2, the optimal portfolio now has two terms. The first, (13), reflects the<br />



instantaneous mean-variance trade-off in excess returns. The second, (14), is an intertemporal hedge related to<br />

stochastic fluctuations in the state variables affecting means and variances. This term reflects a concern for the<br />

future coefficients of asset returns. It is interesting to note that the intertemporal hedge vanishes if the interest rate<br />

and the market prices of risk do not depend on the state variables, and . In this case<br />

the value function does not depend on the state variables, , and the cross-partial derivatives<br />

vanish, . Another instance where hedging does not matter is when the investor has logarithmic<br />

utility and bequest functions. The value function becomes additively separable .<br />

The optimal portfolio is then entirely motivated by the risk-return trade-off and the investor displays myopic<br />

behavior.<br />

Numerical implementation of the model requires the computation of the solution of the PDE characterizing the value<br />

function. Except for low dimensional systems, this is a difficult task. Solutions for realistic models with multiple<br />

state variables and non-linear dynamics are typically difficult, if not impossible, to calculate.<br />

Remark 1. Assume constant relative risk aversion and<br />

where is the relative risk aversion coefficient and is a deterministic subjective discount factor. The value<br />

function is multiplicatively separable , if the function satisfies certain<br />

conditions. The optimal consumption-bequest policy is and . The portfolio<br />

components become<br />

where is the column vector of partial derivatives of with respect to .<br />

Under constant relative risk aversion, optimal policies are proportional to wealth. This reduces the complexity of the<br />

PDE characterizing the solution of the model. Nevertheless, even in this case, the numerical resolution becomes a<br />

challenge if the system of relevant state variables does not have a low dimensionality.<br />
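When the interest rate and the market prices of risk are constant, the hedging components vanish and the constant relative risk aversion investor holds the myopic mean-variance portfolio, with fractions of wealth that do not depend on the wealth level. A minimal sketch, with illustrative parameter values:<br />

```python
import numpy as np

# Under constant relative risk aversion with constant coefficients, the
# hedging components vanish and the optimal fractions of wealth are the
# myopic (Merton) ratios (1/gamma) * Sigma^{-1}(mu - r); dollar amounts
# then scale linearly with wealth.  All numbers are illustrative.
gamma = 4.0                            # relative risk aversion
r = 0.02
mu = np.array([0.08, 0.11])
Sigma = np.array([[0.05, 0.01],
                  [0.01, 0.08]])

pi = np.linalg.solve(Sigma, mu - r) / gamma   # fractions of wealth
amounts_1k = 1_000.0 * pi                     # dollar amounts at W = 1000
amounts_2k = 2_000.0 * pi                     # ... and at W = 2000
print(pi, amounts_1k, amounts_2k)
```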

4 Dynamic portfolio choice: martingale method<br />

A probabilistic approach can be used in order to overcome computational and other difficulties inherent in the PDE<br />

(dynamic programming) method. This approach relies on martingale theory tools, which were introduced in finance<br />

by Pliska (1986), Karatzas, Lehoczky and Shreve (1987), Cox and Huang (1989), Ocone and Karatzas (1991) and<br />

Detemple, Garcia and Rindisbacher (2003). It also builds on the risk neutralization concept originally introduced by<br />

Cox and Ross (1976) in the context of derivatives valuation and further developed by Harrison and Kreps (1979)<br />

and Harrison and Pliska (1981).<br />

4.1 Main results<br />

The core of this approach rests on the identification of the state price density implied by the complete financial<br />

market model postulated. This is the process<br />

which is a path-dependent functional of the interest rate and the market prices of risk. The state price density is the<br />

stochastic discount factor in the model at hand. It gives the value at time of a “dollar” received at time . The<br />

conditional version of the state price density, , gives the value at time of a “dollar” received at time<br />

.<br />
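For constant coefficients, the state price density takes a familiar exponential form; since the displayed equation (17) is not reproduced above, the expression used below is an assumption based on the surrounding description (a path functional of the interest rate and the market price of risk). The simulation checks the property stated in the text: the expected state price density equals the value of a “dollar” received at time T.<br />

```python
import numpy as np

# Pathwise state price density.  With constant interest rate r and market
# price of risk theta, the standard expression (assumed here, since the
# displayed equation is not reproduced) is
#   xi_T = exp(-r*T - theta*W_T - 0.5*theta^2*T).
# Sanity check implied by the text: E[xi_T] is the value at time 0 of a
# "dollar" received at T, i.e. exp(-r*T).
rng = np.random.default_rng(4)

r, theta = 0.03, 0.4
T, n_steps, n_paths = 1.0, 252, 50_000
dt = T / n_steps

stoch_int = np.zeros(n_paths)          # integral of theta dW along each path
for _ in range(n_steps):
    stoch_int += theta * rng.normal(0.0, np.sqrt(dt), n_paths)

xi_T = np.exp(-r * T - stoch_int - 0.5 * theta**2 * T)
print(xi_T.mean(), np.exp(-r * T))
```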

The maximization problem of the investor can be restated in the static form (see Pliska (1986), Karatzas, Lehoczky<br />

and Shreve (1987) and Cox and Huang (1989) for details)<br />



subject to the static budget constraint<br />

and the non-negativity constraints in (11). In the static problem, optimization is carried out over the<br />

consumption-bequest policy. The portfolio is a residual decision, determined by the need to finance the<br />

optimal consumption-bequest policy.<br />
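Since the displayed equations of the static problem are not reproduced above, the standard statement (as in Cox and Huang (1989), in the notation of Section 3, with state price density $\xi$, consumption $c$, terminal wealth $X_T$ and initial wealth $x$) reads:<br />

```latex
\max_{(c,\,X_T)}\; E\!\left[\int_0^T u(c_t,t)\,dt + U(X_T,T)\right]
\quad\text{subject to}\quad
E\!\left[\int_0^T \xi_t\,c_t\,dt + \xi_T X_T\right] \le x,
\qquad c_t \ge 0,\ X_T \ge 0.
```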

Theorem 3. (Ocone and Karatzas (1991), Detemple, Garcia and Rindisbacher (2003)). The optimal consumption<br />

and bequest policies are<br />

(20)<br />

where is the state price density in (17) and is the unique solution of the equation<br />

The optimal portfolio is with<br />

where , and are the absolute risk tolerance measures<br />

and evaluated at optimal consumption and bequest. Moreover,<br />

where is the Malliavin derivative process that satisfies the linear stochastic differential equation<br />

and are the gradients of with respect<br />

to .<br />

The marginal cost of consumption is the state price density adjusted by a Lagrange multiplier so as to account for<br />

the need to satisfy the static budget constraint. Thus, and the optimal consumption-bequest<br />

policies (20) follow. The equation for is then dictated by the need to saturate the budget constraint. Optimal wealth<br />

is the present value of future consumption and bequest<br />

The optimal portfolio determines the volatility of wealth. To identify it, it suffices to calculate the volatility<br />

coefficients on the left and right hand sides of (26). This can be done by applying the Clark-Ocone formula (see<br />

Ocone and Karatzas (1991) and Detemple, Garcia and Rindisbacher (2003)) and the rules of Malliavin calculus (see<br />

Nualart (1995) for a comprehensive treatment). This leads to (22)-(23).<br />

The portfolio component (22) corresponds to the mean-variance term (13). The new element is that the leading term,<br />

which was written as a ratio of derivatives of the value function in the previous formula, is now seen to represent the<br />

cost of optimal risk tolerance. The portfolio component (23) is the dynamic hedge due to fluctuations in the state<br />

variables. This dynamic hedge has two fundamental parts. The first is motivated by fluctuations in the interest rate.<br />

This is the term depending on in (24). It represents an interest rate hedge. The second is due to fluctuations<br />

in the market prices of risk. It comprises the terms in in (24) and represents a market price of risk hedge.<br />

The two hedges can be written as<br />



where<br />

The Malliavin derivatives in these formulas capture the impact on of a perturbation in the Brownian motion<br />

at time . Malliavin derivatives are similar to impulse response functions used in economics and other disciplines to<br />

describe the reaction of endogenous variables to exogenous shocks.<br />
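The impulse-response interpretation can be checked numerically: for a mean-reverting diffusion with constant volatility, the Malliavin derivative has an explicit exponential solution, and a small bump to the Brownian increment at time s should reproduce it. The dynamics and coefficients below are illustrative assumptions, not the model of the text.<br />

```python
import numpy as np

# Malliavin derivative as an impulse response.  For the illustrative
# mean-reverting diffusion dY = kappa*(theta - Y) dt + sigma dW (constant
# diffusion coefficient, a special case of the linear SDE in the text),
# the derivative has the explicit form D_s Y_T = sigma * exp(-kappa*(T - s)).
# Bumping the Brownian increment at time s should reproduce it.
rng = np.random.default_rng(5)

kappa, theta, sigma = 1.5, 0.04, 0.2
T, n_steps = 1.0, 1000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), n_steps)

def terminal_value(increments):
    y = theta
    for dw in increments:              # Euler scheme for Y
        y = y + kappa * (theta - y) * dt + sigma * dw
    return y

s_index = 300                          # perturb the shock at s = 0.3
eps = 1e-6
bumped = dW.copy()
bumped[s_index] += eps

numerical = (terminal_value(bumped) - terminal_value(dW)) / eps
analytical = sigma * np.exp(-kappa * (T - (s_index + 1) * dt))
print(numerical, analytical)
```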

The formulas in Theorem 3 open the way to implementations for high dimensional systems of state variables. All<br />

expressions in the theorem are in explicit form as functions or functionals of the state price density and other<br />

relevant variables. In the case of the portfolio, the components are expectations of random variables which depend<br />

on the trajectories of the underlying Brownian motions. The Malliavin derivatives involved solve linear stochastic<br />

differential equations (25). Computation of the portfolio components can be performed by simulating the solutions<br />

of the relevant SDEs and calculating expectations by averaging over simulated values. 2 Monte Carlo simulation can<br />

be implemented for arbitrary number of assets and state variables. It can also be carried out for utility functions with<br />

wealth-dependent risk aversions and for non-linear state variable dynamics. 3<br />
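The simulate-then-average pattern can be sketched as follows for a single state variable. The mean-reverting dynamics and the functional being averaged are placeholders for illustration; the actual portfolio components involve the expressions in Theorem 3.<br />

```python
import numpy as np

# The simulate-then-average pattern: discretize the state SDE with an Euler
# scheme, simulate many trajectories, and average a path functional.  The
# mean-reverting dynamics and the discount-factor-type functional below are
# placeholders, not the actual portfolio expressions of Theorem 3.
rng = np.random.default_rng(0)

kappa, theta, sigma = 2.0, 0.05, 0.1   # illustrative coefficients
y0, T, n_steps, n_paths = 0.05, 1.0, 252, 10_000
dt = T / n_steps

y = np.full(n_paths, y0)
integral = np.zeros(n_paths)           # pathwise integral of the state
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    integral += y * dt
    y = y + kappa * (theta - y) * dt + sigma * dW   # Euler step

estimate = np.exp(-integral).mean()    # Monte Carlo average over paths
std_error = np.exp(-integral).std(ddof=1) / np.sqrt(n_paths)
print(estimate, std_error)
```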

4.2 Numerical example: stochastic interest rate and market price of risk<br />

A parametric model introduced in Detemple, Garcia and Rindisbacher (2003) is used for illustration of the method.<br />

State variables satisfy the stochastic differential equations<br />

(27)<br />

with<br />

The parameters are constants. The coefficients are<br />

positive and . The Brownian motion has dimension .<br />

The interest rate follows the NMRCEV process (27). This process has nonlinear mean-reversion (NMR) and<br />

constant elasticity of variance (CEV). The speed of mean reversion is the nonlinear function . The<br />

market price of risk follows the MRHEVID process (28). This process exhibits (linear) mean-reversion (MR) and<br />

has a hyperbolic elasticity of variance (HEV), given by the function<br />

Volatility converges to zero as approaches the points and . The mean also depends on the interest rate (ID)<br />

through the function (29). At the points and the interest rate dependence vanishes and is pulled toward<br />

2 An alternative is to apply a change of variables proposed by Doss (1977) in order to stabilize the volatility coefficients of the processes to be<br />

simulated (see Detemple, Garcia and Rindisbacher (2003, 2005a)). The portfolio formula can be rewritten in terms of the transformed state<br />

variables. Portfolio components can then be calculated by Monte Carlo simulation based on the transformed state variables.<br />

3 Simulation-based methods have also been proposed by Cvitanic, Goukasian and Zapatero (2003), Brandt, Goyal, Santa-Clara and Stroud (2005)<br />

and others. A review of simulation methods for portfolio choice appears in Detemple, Garcia and Rindisbacher (2008). A comparison of methods<br />

is carried out in Detemple, Garcia and Rindisbacher (2005b).<br />



. The MRHEVID process wanders stochastically within the band .<br />

Parameter values are displayed in Table 1. With the exception of , they correspond to estimates reported in Detemple, Garcia and Rindisbacher (2003).<br />

Table 1: Parameter values<br />

Interest rate Market price of risk<br />

= 0.0027668 = 0.85576<br />

= 0.0063138 × 12 = 0.30<br />

= 37.008/(12 × 2 × 0.45432) = 3.0708<br />

= 0.45432 = 1.50<br />

= 0.154055 × 12 = 1.50<br />

= 1.1741 = 2.9417<br />

= 0.50<br />

= 2.8313<br />

Initial values are set at and . The stock volatility is . Implementation is carried out for<br />

the model with HARA utility function over terminal wealth (no intermediate consumption) where and<br />

(i.e., ). Monte Carlo estimates are calculated based on<br />

trajectories and time discretization points per year.<br />
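A sketch of the path simulation underlying these computations, for a mean-reverting CEV short rate of the general form dr = kappa*(theta - r) dt + sigma*r^q dW. The round coefficients below are illustrative, not the Table 1 estimates (whose symbol labels are not reproduced here), and simple truncation at zero stands in for more careful boundary handling such as the Doss change of variables mentioned in footnote 2.<br />

```python
import numpy as np

# Illustrative simulation of a mean-reverting CEV short rate,
#   dr = kappa*(theta - r) dt + sigma * r^q dW.
# The round coefficients below are NOT the Table 1 estimates; truncation at
# zero is a crude stand-in for more careful boundary handling.
rng = np.random.default_rng(1)

kappa, theta, sigma, q = 1.2, 0.04, 0.3, 1.5
r0, T, n_steps, n_paths = 0.04, 10.0, 120, 5_000   # monthly time steps
dt = T / n_steps

r = np.full(n_paths, r0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    r = r + kappa * (theta - r) * dt + sigma * r**q * dW
    r = np.maximum(r, 0.0)             # keep the rate non-negative

print(r.mean(), r.std())
```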

Figure 2 shows the behavior of the optimal portfolio as initial wealth and the horizon vary. Figure 3 shows the<br />

unconstrained portfolio behavior, when terminal wealth is allowed to become negative. Both figures plot the<br />

portfolio and its components as fractions of initial wealth. The nonlinear effect of wealth is especially strong at low<br />

levels of wealth. Nonlinear horizon effects are also important.<br />

[Figure 2 appears here: four surface plots (portfolio, mean-variance demand, interest rate hedge, market price of risk hedge) as functions of wealth (1000 to 4000) and horizon (10 to 40).]<br />

Figure 2: Constrained portfolio behavior for HARA utility with and . Wealth varies from to ,<br />

horizon from to . Computations are based on trajectories and time points per year.<br />



[Figure 3 appears here: four surface plots (unconstrained portfolio, unconstrained mean-variance demand, unconstrained interest rate hedge, unconstrained market price of risk hedge) as functions of wealth (1000 to 4000) and horizon (10 to 40).]<br />

Figure 3: Unconstrained portfolio behavior for HARA utility with and . Wealth varies from to ,<br />

horizon from to . Computations are based on trajectories and time points per year.<br />

5 Optimal portfolios and bond numeraire<br />

Additional perspective about the portfolio policy can be gained by using long term bonds as units of account. Such a<br />

change of numeraire has proved useful for pricing fixed income derivatives (Jamshidian (1989), Geman (1989)).<br />

The mathematical underpinnings and economic implications of numeraire changes are studied in Geman, El Karoui<br />

and Rochet (1995). Applications to optimal portfolios are carried out in Sorensen (1999), Lioui and Poncet (2001,<br />

2003), Munk and Sorensen (2004), Lioui (2007) and Detemple and Rindisbacher (2010).<br />

Let be the price of a pure discount bond with maturity date . By standard valuation principles .<br />

Using this bond as new unit of account entails dividing all prices by . The state price density in this bond<br />

numeraire, which is called the forward density, is<br />

The associated measure is called the forward measure. Pricing in the bond numeraire can be done directly, by using<br />

as the stochastic discount factor for cash flows received at time . Thus, the value of a payoff is ,<br />

which can also be written as where is the conditional expectation under the forward measure. It is also<br />

useful to note that the original state price density is .<br />
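The pricing identity can be verified numerically: with pathwise discount factors, the bond price is their average, the forward-measure expectation is their reweighted average, and both pricing routes return the same value. The Vasicek-type short-rate dynamics and all parameter values below are illustrative assumptions.<br />

```python
import numpy as np

# Pricing under the bond numeraire as a reweighting.  With pathwise
# discount factors D_T = exp(-integral of r), the bond price is
# B(0,T) = E[D_T], and the forward-measure expectation of a payoff X is the
# D_T-weighted average; B(0,T)*E^T[X] then equals E[D_T*X].  Vasicek-type
# dynamics and all parameters are illustrative assumptions.
rng = np.random.default_rng(2)

kappa, theta, sigma = 1.0, 0.03, 0.02
r0, T, n_steps, n_paths = 0.03, 5.0, 500, 20_000
dt = T / n_steps

r = np.full(n_paths, r0)
int_r = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    int_r += r * dt
    r = r + kappa * (theta - r) * dt + sigma * dW

D = np.exp(-int_r)                     # pathwise discount factors
B = D.mean()                           # zero-coupon bond price B(0,T)

X = np.maximum(r - 0.03, 0.0)          # an interest-rate payoff at T
value_direct = (D * X).mean()          # E[D_T * X]
forward_weights = D / D.sum()          # forward-measure weights on paths
value_forward = B * (forward_weights * X).sum()   # B(0,T) * E^T[X]
print(B, value_direct, value_forward)
```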

With this notation, the optimal policies can be restated as follows.<br />

Theorem 4. (Detemple and Rindisbacher (2010)) Optimal consumption and bequest are and<br />

. Intermediate wealth is<br />



The optimal portfolio is with<br />

The volatility of is , where is the volatility of the return on the pure discount<br />

bond . The conditional expectation is taken under the forward measure.<br />

By using long term bonds as units of account, the portfolio has been decomposed into three parts. The first component<br />

is the mean-variance demand, which is now written so as to highlight the impact of the term structure of bond prices.<br />

The second component is a term structure hedge. This hedge has a static flavor, as it is motivated by the<br />

instantaneous fluctuations in the term structure of bond prices. The last component is a forward density hedge. This<br />

hedge has an intertemporal flavor. It is motivated by stochastic fluctuations in the future volatilities of the forward<br />

densities (equivalently, in the future market prices of risk in the bond numeraires).<br />

The formula in Theorem 4 is useful in that it highlights the role of bonds. It immediately identifies a portion of the<br />

dynamic hedging demand in Theorem 3 as a static hedge related to instantaneous fluctuations in bond prices. This<br />

component motivates a fundamental need for bonds. 4 The formula also permits the integration of asset management<br />

and term structure models. Sophisticated models, such as those in the Heath, Jarrow and Morton (1992) or Cox,<br />

Ingersoll and Ross (1985) classes, can readily be used as backbones for asset allocation, by simply plugging the<br />

relevant forward rates or bond prices in the portfolio formula.<br />

6 Perspectives and future research<br />

The models and approaches outlined above permit asset allocation in a variety of contexts taking account of realistic<br />

features of returns. The Monte Carlo methods in Sections 4 and 5 are especially versatile and powerful. This section<br />

discusses some limitations of the methods and focuses on a few areas of future research.<br />

6.1 Incomplete markets and portfolio constraints<br />

The assumption that all risks are hedgeable (complete markets) is perhaps the most restrictive aspect of the models<br />

described above. Incomplete markets and portfolio constraints are financial market realities that are important for a<br />

variety of investors, including pension plans, institutional investors and investment banks.<br />

Constraints on portfolios can easily be incorporated in the dynamic programming framework by adding the relevant<br />

inequalities as additional constraints to the optimization problem. In certain rare instances the resulting partial<br />

differential equation for the value function can be solved. In most cases finding a closed form solution is elusive and<br />

a numerical method must be used. As discussed above, the approach quickly runs into a curse of dimensionality.<br />

Unless the model of interest has low dimensionality, numerical resolution will prove difficult, if not impossible.<br />

Constraints on portfolios can also be incorporated in the static optimization problem. Various characterizations of<br />

the solution have been proposed (see Karatzas, Lehoczky, Shreve and Xu (1991) and Cvitanic and Karatzas (1992)).<br />

Monte Carlo implementation on the basis of these characterizations, however, remains challenging. The main difficulty<br />

4 Models seeking to explain the demand for bonds include Wachter (2003) and Lioui (2007).<br />



is that the characterization of the solution often involves forward-backward stochastic differential equations, which<br />

are notoriously difficult to handle, both mathematically and computationally.<br />

For illustration consider the following basic model with incomplete markets<br />

where is a -dimensional Brownian motion of hedgeable risks and is a -dimensional Brownian motion of<br />

unhedgeable risks. There are risky assets and the volatility matrix is invertible. The vector of market<br />

prices of hedgeable risks is . The price of unhedgeable risks is a -<br />

dimensional vector stochastic process to be identified in the optimization problem. This price is preference-specific<br />

and captures the shadow price of the incompleteness constraint. The state price density induced by<br />

hedgeable risks is . The density associated with unhedgeable risks is . The stochastic discount factor for valuation<br />

is .<br />

The optimal portfolio in this case can be written as<br />

with<br />

where<br />

In these expressions, represents the Malliavin derivative of the state variable vector with respect to hedgeable<br />

risks . The portfolio has a new term, , which is a hedge against (hedgeable) fluctuations in the shadow price of<br />

incompleteness. The latter is determined by the condition that it is not investable/hedgeable<br />

where are given by (42)-(43) with in place of . This is a forward-backward SDE for .<br />

Implementation of (38)-(44) for general models of the type (36)-(37) requires a numerical scheme for computing the<br />

solution of (44). 5<br />

6.2 Jumps<br />

Discontinuous components in asset returns have also been amply documented in the empirical literature. The recent<br />

credit crisis provides a stark reminder of the importance of jumps and the need to insure against such events.<br />

5 Solutions, with non-zero shadow price, have been found for special cases (e.g., Detemple and Rindisbacher (2005, 2010)).<br />



Models of discontinuous returns are easily formulated. A general model is<br />

where is a jump measure and is its compensator. Jump sizes are random and belong to some<br />

subset of the reals. This structure includes a variety of settings of practical relevance. In particular, it permits<br />

economy-wide jumps that affect market returns as well as state variables.<br />

In models with jumps one is immediately confronted with the issue of unhedgeable risks. Jump risk as well as jump<br />

size risk may be unhedgeable. In the dynamic programming approach, the possibility of jumps introduces a nonlocal<br />

component corresponding to the expected jump in the value function, in the Hamilton-Jacobi-Bellman<br />

equation. Except for simple specifications, this term precludes closed form solutions. It also increases the<br />

computational challenges faced by standard numerical methods for PDEs. The key for the martingale approach<br />

remains the computation of the price of incompleteness.<br />
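A minimal simulation of discontinuous returns in the spirit of (45), using a compound Poisson jump component with normally distributed jump sizes. This Merton-style specification is an illustrative special case chosen here for concreteness; the general jump measure in the text need not take this form.<br />

```python
import numpy as np

# A compensated jump-diffusion return, a Merton-style special case of the
# general jump measure in the text (an illustrative assumption).  Jumps
# arrive at Poisson times with normal sizes; subtracting the compensator
# keeps the expected gross return at exp(mu*T).
rng = np.random.default_rng(3)

mu, sigma = 0.08, 0.2                  # diffusion drift and volatility
lam, jm, js = 0.5, -0.05, 0.1          # jump intensity, mean and std of sizes
T, n_steps, n_paths = 1.0, 252, 10_000
dt = T / n_steps
comp = lam * (np.exp(jm + 0.5 * js**2) - 1.0)   # jump compensator

log_S = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    k = rng.poisson(lam * dt, n_paths)          # number of jumps this step
    # sum of k normal jump sizes ~ N(k*jm, k*js^2)
    jumps = np.where(k > 0,
                     rng.normal(jm * k, js * np.sqrt(np.maximum(k, 1))),
                     0.0)
    log_S += (mu - 0.5 * sigma**2 - comp) * dt + sigma * dW + jumps

S = np.exp(log_S)
print(S.mean(), np.quantile(S, 0.01))  # jumps fatten the left tail
```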

6.3 Preferences<br />

The vNM axioms place strong restrictions on individual behavior. The Allais (1953) and Ellsberg (1961) paradoxes<br />

are well known examples of violations of the axioms underlying expected utility theory. In intertemporal settings,<br />

expected utility imposes a direct link between attitudes toward risk and attitudes toward the passage of time. Such a<br />

restriction is difficult to motivate from an economic point of view.<br />

Various preference models have been proposed to address these and other issues. Recursive utility and its<br />

continuous time adaptation, stochastic differential utility (SDU), have become especially popular during the past two<br />

decades (see Epstein and Zin (1989) and Duffie and Epstein (1992)). Preferences in this class permit a separation<br />

between risk and time preferences, leading to a more plausible model of individual behavior. The general SDU<br />

formulation postulates<br />

as a choice criterion. In this specification, is an instantaneous aggregator and is the value function<br />

associated with the consumption-terminal wealth plan . The value function is the solution of the backward<br />

SDE (47). The classic vNM criterion, as in (9), is recovered from the linear aggregator . For<br />

general aggregators, it is not possible to write the value function in closed form, solely in terms of the<br />

consumption-bequest plan.<br />
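The backward-recursive structure of (47) can be illustrated in discrete time: the value is computed by recursing from the terminal date through an aggregator. With a linear aggregator the recursion collapses to additive expected utility, as noted above; the non-linear aggregator and the deterministic consumption path below are purely illustrative choices, not the Duffie-Epstein specification.<br />

```python
import math

# Discrete-time sketch of the backward recursion behind (47):
# J_t = f(c_t, J_{t+1}).  With the linear aggregator f(c, v) = u(c) + v
# (discounting dropped for brevity) the recursion collapses to additive
# utility; the non-linear aggregator below is a purely illustrative choice,
# not the Duffie-Epstein specification.
def backward_value(c, aggregator):
    v = 0.0
    for ct in reversed(c):             # recurse from the terminal date
        v = aggregator(ct, v)
    return v

c = [1.0, 1.2, 0.9, 1.1]               # a deterministic consumption path
u = math.log

linear = backward_value(c, lambda ct, v: u(ct) + v)
additive = sum(u(ct) for ct in c)      # classic vNM (additive) criterion

alpha = 0.5                            # curvature of the toy aggregator
recursive = backward_value(
    c, lambda ct, v: u(ct) + ((1.0 + v)**alpha - 1.0) / alpha)
print(linear, additive, recursive)
```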

First order conditions associated with (47) involve the gradient of the value process. Resulting characterizations<br />

of the optimal consumption plan depend on the unknown value function (see Schroder and Skiadas (1999)). Optimal<br />

portfolio formulas inherit this complexity and cannot be easily solved for or analyzed, as in the standard vNM case.<br />

As a result, little is known about the behavior of optimal consumption-portfolio policies. Efficient numerical<br />

methods for dealing with this problem are therefore of substantial theoretical and practical importance.<br />

7 Conclusion<br />

Portfolio selection has undergone a radical transformation over the last six decades. Modern methods provide economic<br />

insights into a range of questions pertaining to optimal behavior. They also provide the tools needed for<br />

implementation of optimal rules in contexts of economic and financial relevance. Nonlinear phenomena affecting<br />

securities returns and utility functions with wealth-dependent risk aversions can readily be handled using Monte<br />

Carlo methods. Large numbers of assets and state variables can also be incorporated in the analysis without<br />

compromising on implementability. The flexibility of these methods is of considerable interest for pension plans and<br />



institutions seeking to improve the control and performance of their asset management activities.<br />

Yet, much remains to be done in certain areas. The issues discussed in the previous section constitute important<br />

avenues for future research. Another aspect that deserves attention is the question of parameter estimation. This<br />

question is critical for practical implementations. Econometric methods currently available often provide imprecise<br />

estimates for model parameters. Estimation error is a type of risk that needs to be properly accounted for in the<br />

allocation process.<br />

8 References<br />

[1] M. Allais, “Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école<br />

Américaine,” Econometrica, 21, 1953: 503-546.<br />

[2] M.W. Brandt, A. Goyal, P. Santa-Clara and J.R. Stroud, “A simulation approach to dynamic portfolio choice<br />

with an application to learning about return predictability,” Review of Financial Studies, 18, 2005: 831-873.<br />

[3] M. Brennan, E. Schwartz and R. Lagnado, “Strategic asset allocation,” Journal of Economic Dynamics and<br />

Control, 21, 1997: 1377-1403.<br />

[4] J.C. Cox and C-f. Huang, “Optimal consumption and portfolio policies when asset prices follow a diffusion<br />

process,” Journal of Economic Theory, 49, 1989: 33-83.<br />

[5] J.C. Cox, J.E. Ingersoll and S.A. Ross, “A theory of the term structure of interest rates,” Econometrica, 53,<br />

1985: 385-407.<br />

[6] J.C. Cox and S.A. Ross, “The valuation of options for alternative stochastic processes,” Journal of Financial<br />

Economics, 3, 1976: 145–166.<br />

[7] J. Cvitanic, L. Goukasian and F. Zapatero, “Monte Carlo computation of optimal portfolio in complete<br />

markets,” Journal of Economic Dynamics and Control, 27, 2003: 971-986.<br />

[8] J. Cvitanic and I. Karatzas, “Convex duality in constrained portfolio optimization,” Annals of Applied<br />

Probability, 2, 1992: 767-818.<br />

[9] J.B. Detemple, R. Garcia and M. Rindisbacher, “A Monte Carlo method for optimal portfolios,” Journal of<br />

Finance, 58, 2003: 401-446.<br />

[10] J. Detemple, R. Garcia and M. Rindisbacher, “Representation formulas for Malliavin Derivatives of diffusion<br />

processes,” Finance and Stochastics, 9, 2005a: 349-367.<br />

[11] J. Detemple, R. Garcia and M. Rindisbacher, “Intertemporal asset allocation: a comparison of methods,”<br />

Journal of Banking and Finance, 29, 2005b: 2821-2848.<br />

[12] J. Detemple, R. Garcia and M. Rindisbacher, “Simulation methods for optimal portfolios,” in Handbooks in<br />

Operations Research and Management Science, 15, Financial Engineering, J.R. Birge and V. Linetsky eds.,<br />

Elsevier, Amsterdam, 2008: 867-923.<br />

[13] J. Detemple and M. Rindisbacher, “Closed form solutions for optimal portfolio selection with stochastic<br />

interest rate and investment constraints,” Mathematical Finance, 15, 2005: 539-568.<br />

[14] J. Detemple and M. Rindisbacher, “Dynamic asset allocation: portfolio decomposition formula and<br />

applications,” Review of Financial Studies, 23, 2010: 25-100.<br />

[15] H. Doss, “Liens entre équations différentielles stochastiques et ordinaires,” Annales de l’Institut H. Poincaré,<br />

13, 1977: 99–125.<br />

[16] D. Duffie and L. Epstein, “Stochastic differential utility,” Econometrica, 60, 1992: 353-394.<br />

[17] D. Ellsberg, “Risk, ambiguity, and the Savage Axioms,” Quarterly Journal of Economics, 75, 1961: 643-669.<br />

[18] L.G. Epstein and S.E. Zin, “Substitution, risk aversion, and the temporal behavior of consumption and asset<br />

returns: a theoretical framework,” Econometrica, 57, 1989: 937-969.<br />

[19] H. Geman, “The importance of the forward neutral probability in a stochastic approach of interest rates,”<br />

Working Paper, ESSEC, 1989.<br />

[20] H. Geman, N. El Karoui and J.C. Rochet, “Changes of numéraire, changes of probability measures and option<br />

pricing,” Journal of Applied Probability, 32, 1995: 443-458.<br />



[21] M.J. Harrison and D.M. Kreps, “Martingales and arbitrage in multiperiod securities markets,” Journal of<br />

Economic Theory, 20, 1979: 381-408.<br />

[22] M.J. Harrison and S.R. Pliska, “Martingales and Stochastic integrals in the theory of continuous trading,”<br />

Stochastic Processes and their Applications, 11, 1981: 215-260.<br />

[23] D. Heath, R.A. Jarrow and A. Morton, “Bond pricing and the term structure of interest rates: a new<br />

methodology for contingent claims valuation,” Econometrica, 60, 1992: 77-105.<br />

[24] F. Jamshidian, “An exact bond option formula,” Journal of Finance, 44, 1989: 205-209.<br />

[25] I. Karatzas, J.P. Lehoczky and S.E. Shreve, “Optimal portfolio and consumption decisions for a “small<br />

investor” on a finite horizon,” SIAM Journal on Control and Optimization, 25, 1987: 1557-1586.<br />

[26] I. Karatzas, J.P. Lehoczky, S.E. Shreve and G.L. Xu, “Martingale and duality methods for utility maximization<br />

in an incomplete market,” SIAM Journal on Control and Optimization, 29, 1991: 702-730.<br />

[27] I. Karatzas and S.E. Shreve, Brownian motion and stochastic calculus, 2nd Edition, Springer Verlag, New<br />

York, 1991.<br />

[28] A. Lioui and P. Poncet, “On optimal portfolio choice under stochastic interest rates,” Journal of Economic<br />

Dynamics and Control, 25, 2001: 1841-1865.<br />

[29] A. Lioui and P. Poncet, “International asset allocation: a new perspective,” Journal of Banking and Finance,<br />

27, 2003: 2203–2230.<br />

[30] A. Lioui, “The Asset Allocation Puzzle is still a puzzle,” Journal of Economic Dynamics and Control, 31,<br />

2007: 1185-1216.<br />

[31] H. Markowitz, “Portfolio selection,” Journal of Finance, 7, 1952: 77-91.<br />

[32] R.C. Merton, “Lifetime portfolio selection under uncertainty: the continuous time case,” Review of Economics<br />

and Statistics, 51, 1969: 247-257.<br />

[33] R.C. Merton, “Optimum consumption and portfolio rules in a continuous-time model,” Journal of Economic<br />

Theory, 3, 1971: 273-413.<br />

[34] C. Munk and C. Sorensen, “Optimal consumption and investment strategies with stochastic interest rates,”<br />

Journal of Banking and Finance, 28, 2004: 1987-2013.<br />

[35] D. Nualart, The Malliavin Calculus and Related Topics, Springer Verlag, New York, 1995.<br />

[36] D. Ocone and I. Karatzas, “A generalized Clark representation formula, with application to optimal portfolios,”<br />

Stochastics and Stochastics Reports, 34, 1991: 187-220.<br />

[37] S. Pliska, “A stochastic calculus model of continuous trading: optimal portfolios,” Mathematics of Operations<br />

Research, 11, 1986: 371-382.<br />

[38] M. Rothschild and J.E. Stiglitz, “Increasing risk I: a definition,” Journal of Economic Theory, 2, 1970: 225-<br />

243.<br />

[39] M. Schroder and C. Skiadas, “Optimal consumption and portfolio selection with stochastic differential utility,”<br />

Journal of Economic Theory, 89, 1999: 68-126.<br />

[40] C. Sorensen, “Dynamic asset allocation and fixed income management,” Journal of Financial and Quantitative<br />

Analysis, 34, 1999: 513-531.<br />

[41] J. Wachter, “Risk Aversion and Allocation to Long-term Bonds,” Journal of Economic Theory, 112, 2003:<br />

325–333.<br />

28


ENHANCING FILTERED HISTORICAL SIMULATIONS<br />

Giovanni Barone-Adesi<br />

The Swiss Finance Institute at USI<br />

AFE, Samos, July 1 2011<br />

Abstract. Filtered historical simulation (FHS) is a powerful tool to simulate returns of complex<br />

portfolios. Assets can follow different distributions. Time dependencies are accounted for by filtering,<br />

while cross dependencies are preserved by the choice of simultaneous standardized residuals. This last<br />

procedure does not account for the stronger clustering of returns observed across assets on very<br />

negative days. We discuss a new procedure to introduce this effect without hampering the nonparametric<br />

nature of our simulation procedure.<br />

The accurate simulation of security returns is a necessary step for the valuation and hedging<br />

of securities. In fact, security returns generally do not conform to the simplifying assumptions<br />

of closed-form models, such as normal diffusion. Often the simulation task is complicated by<br />

the need to simulate well the distribution of security returns consistent with current market<br />

conditions, such as short-term volatility, while ensuring convergence to the long-term<br />

distribution of security returns implied by long-dated contingent claims.<br />

FHS. A method to simulate security returns accurately has been proposed by Barone-<br />

Adesi in several papers with Bourgoin, Giannopoulos and Vosper (1998, 2000, 2001, 2002).<br />

Their application was originally to risk-management, though later an empirical change of<br />

measure allowed Barone-Adesi, Engle and Mancini (2008) to price index options.<br />

The original filtered historical simulation (FHS) was based on the following assumptions:<br />

1) Returns are generated by a time-series process of the GARCH family<br />

2) The parameters of the time-series process can be estimated through the minimization of squared errors with little loss of efficiency relative to the true likelihood function.<br />

3) Correlations across assets follow a multivariate process with outcomes independent of<br />

the scale of returns. As a special case, correlations may be constant.<br />

Assumption (1) implies that standardized returns are stationary.<br />

Assumption (2) implies that usual estimation procedures can be used.<br />

Assumption (3) implies that scale changes do not prevent the use of parallel bootstrapping in<br />

generating strips of simulated security returns across assets.<br />
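The procedure implied by assumptions (1)-(3) can be sketched in code. The following is a minimal illustration, not the authors' implementation: the returns are synthetic and the GARCH(1,1) parameters are fixed hypothetical values (the papers estimate them, e.g. by least squares, per assumption 2). The key point it shows is the parallel bootstrap: at each simulated step, one historical date is drawn and its whole cross-sectional strip of standardized residuals is reused, preserving cross dependencies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical daily returns for 3 assets (T x N); in practice
# these would come from market data.
T, N = 500, 3
hist = rng.standard_normal((T, N)) * 0.01

# Illustrative GARCH(1,1) parameters, shared across assets for brevity.
omega, alpha, beta = 1e-6, 0.08, 0.90

def garch_filter(r):
    """Run the GARCH(1,1) variance recursion; return conditional sigmas."""
    h = np.empty_like(r)
    h[0] = r.var(axis=0)                 # initialize at the sample variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return np.sqrt(h)

sigma = garch_filter(hist)
z = hist / sigma                         # standardized residuals (T x N)

def fhs_paths(n_paths=1000, horizon=10):
    """Filtered historical simulation: bootstrap whole cross-sections of
    standardized residuals (same historical date for all assets) and
    rescale them with the simulated GARCH volatilities."""
    paths = np.empty((n_paths, horizon, N))
    for p in range(n_paths):
        h = omega + alpha * hist[-1] ** 2 + beta * sigma[-1] ** 2
        for t in range(horizon):
            day = rng.integers(T)        # one date for all assets keeps
            r = np.sqrt(h) * z[day]      # cross-dependence intact
            paths[p, t] = r
            h = omega + alpha * r ** 2 + beta * h
    return paths

sims = fhs_paths()
print(sims.shape)                        # (1000, 10, 3)
```

Because each simulated day scales a historical residual strip by a freshly updated conditional volatility, the simulated returns are not limited to values observed in the sample.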

Our last assumption may be mis-specified because there is some empirical evidence (see Embrechts et al. (2003) for a summary) that clustering of negative returns may increase with scale. If that<br />

is the case, our parallel bootstrapping is unable to adjust correlations to scale, making our<br />

simulated distribution different from the original one. Note however that larger simulated<br />

returns tend to be generated from larger standardized residuals and volatilities. These<br />

variables show common clustering in time. Therefore our simulated returns preserve the<br />

higher correlation found in larger observed returns, though they may fail to reflect the<br />

relationship between size of returns and correlation accurately. The original FHS<br />



extrapolation process may be considered a first order approximation of the return distribution<br />

under these circumstances.<br />

The FHS approach can be adapted to stress testing because it simulates the whole<br />

distribution of security returns. It is not limited to the observed returns as regular<br />

bootstrapping is. Therefore it is possible to sample from more extreme points in the tails of<br />

the multivariate distribution by increasing the number of simulation runs. As an example,<br />

applying FHS over a 10-day horizon using a database of 500 daily returns, the number of<br />

different possible pathways is 500<sup>10</sup> for any given set of initial conditions. Therefore it is<br />

possible to generate an almost arbitrarily large number of points of the simulated distribution.<br />

This is especially important for portfolios of derivatives that may experience more stress at<br />

points not on the tails of the distribution.<br />

Finding the most stressful conditions for a given portfolio may be slow, because the<br />

probability of each pathway is the inverse of the number of pathways. Of course the problem<br />

of finding the most stressful conditions is easier for linear portfolios, for which the most<br />

stressful conditions are given by the largest negative or positive returns, that can be preselected<br />

in the simulation procedure.<br />

Enhanced FHS. A limitation on the accuracy of FHS, that cannot be remedied by<br />

increasing the number of simulation runs, stems from the neglect of the stronger clustering<br />

often observed in empirical asset returns. The multivariate copula of asset returns is not Gaussian, even after returns are standardized by dividing them by their time-varying standard deviations.<br />

Multivariate copulas are unfortunately cumbersome to calibrate, because of very low<br />

probabilities associated with each cell far out in the tails. Moreover simulating returns<br />

consistent with them would require multivariate random number generation that follows the<br />

chosen copula, a cumbersome task of its own. Remember that FHS requires only selecting a<br />

date within the historical database at each step of the simulation.<br />

Is it possible to improve the accuracy of FHS while retaining most of its simplicity? In<br />

principle an obvious strategy could be to sample only standardized returns associated with the<br />

current level of market volatility. This way strips of returns across assets in high volatility<br />

days will reflect the clustering typical of those days, rather than the average correlation across<br />

all the days in the historical data.<br />

The above strategy may work only with extremely long series of historical data. The<br />

reason is that the above scheme effectively partitions the historical data into volatility<br />

buckets. Only one of them is available for drawing simulated returns on any given day. However, as Pritsker (2001) shows, FHS becomes fairly unreliable if only a few historical returns are available. With daily returns, ‘few’ is of the order of 500.<br />

If we require that each volatility bucket contains 500 observations, we cannot use many<br />

buckets of daily data. Otherwise our stationarity assumptions may be stretched. Empirical<br />

investigations (Barone-Adesi, Mira and Solgi, in progress) suggest that two<br />

buckets may be the optimal compromise. Therefore we sample each day from the high or the<br />

low volatility bucket of standardized returns, depending on our computed GARCH volatility.<br />
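A sketch of this two-bucket sampling rule, with synthetic volatilities and residual strips standing in for the estimated GARCH quantities. The median split is an illustrative choice only; in practice the threshold would be picked so that each bucket retains enough observations, per Pritsker's warning.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: one market volatility and one cross-sectional strip
# of standardized residuals per historical date.
T, N = 1000, 3
sigma_hist = 0.01 * np.exp(rng.standard_normal(T) * 0.3)  # per-date volatility
z = rng.standard_normal((T, N))                           # residual strips

# Partition the historical dates into a low- and a high-volatility bucket,
# split at the median so both buckets keep half the observations.
threshold = np.median(sigma_hist)
low_bucket = np.where(sigma_hist <= threshold)[0]
high_bucket = np.where(sigma_hist > threshold)[0]

def draw_strip(current_vol):
    """Sample a cross-sectional strip of standardized residuals from the
    bucket that matches today's computed GARCH volatility."""
    bucket = high_bucket if current_vol > threshold else low_bucket
    return z[rng.choice(bucket)]

strip = draw_strip(current_vol=0.02)
print(strip.shape)   # (3,)
```

Strips drawn on simulated high-volatility days then reflect the joint behaviour of residuals on historical high-volatility days, rather than the average dependence across the whole sample.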



This technique has been backtested on equally weighted portfolios of two stocks from the DJIA index (we have tested all possible pairs of components of this index), estimating daily VaR at the 99%, 95%, and 90% confidence levels. Our results show that the proposed technique (using different joint distributions for the innovations in the low and high volatility regimes) improves the VaR estimates.<br />

References<br />

Barone-Adesi, G., F. Bourgoin and K. Giannopoulos, ‘Don’t Look Back’, Risk, August 1998.<br />

Barone-Adesi, G., K. Giannopoulos and L. Vosper, ‘VaR without Correlations for Portfolios of Derivative Securities’, Journal of Futures Markets, August 1999.<br />

Barone-Adesi, G., K. Giannopoulos and L. Vosper, 2001, ‘Backtesting derivative portfolios with filtered historical simulation’, European Financial Management 8 (1), 31–58.<br />

Barone-Adesi, G. and K. Giannopoulos, 2002, ‘Non-parametric VaR techniques: myths and realities’, Economic Notes 30 (July), 167–181.<br />

Barone-Adesi, G., R. F. Engle and L. Mancini, 2008, ‘A GARCH Option Pricing Model with Filtered Historical Simulation’, The Review of Financial Studies 21, 1223–1258.<br />

Embrechts, P., F. Lindskog and A. McNeil, 2003, ‘Modelling Dependence with Copulas and Applications to Risk Management’, in: Rachev, S.T., ed., Handbook of Heavy Tailed Distributions in Finance, Elsevier: North Holland, pp. 329–384.<br />

Pritsker, M., ‘The Hidden Dangers of Historical Simulation’, Finance and Economics Discussion Series 2001-27, Washington: Board of Governors of the Federal Reserve System, 2001.<br />



International propagation of the credit crisis*<br />

Richard A. Brealey, Ian A. Cooper, and Evi Kaplanis**<br />

Original version: November 2010<br />

This version: April 2011<br />

Abstract<br />

In this study we examine the propagation of the recent crisis to banks outside the US.<br />

We develop a framework combining stock market and structural variables that can be<br />

used with both individual bank and country aggregate data. We find that the<br />

differential incidence of the crisis measured by share price impact is explained by<br />

prior correlation with the US banking sector, bank leverage and liability structure, the<br />

foreign assets of banks, and the importance of banking in the economy. We find that a<br />

simple measure of bank capital was a better predictor of crisis impact than the risk-weighted measure of Basel II, but do not find that banks were penalized for making<br />

aggressive use of Basel II rules. These results are robust to various specifications and<br />

whether we use country data or individual banks. Using this framework we test a<br />

number of hypotheses which have been put forward in other studies. We find that<br />

some results are sensitive to sample selection and test specification. We do not find<br />

evidence that the incidence of the crisis was associated with mortgage holdings, stock<br />

market returns prior to the crisis, or standards of governance. We do find that<br />

countries with higher prior GDP growth suffered less in the crisis. We discuss the<br />

implications of our results for bank regulation.<br />

JEL Codes: F33, F36, G1, G15, G18, G21, G28, G32, G34.<br />

Keywords: banking, financial institutions, capital structure, risk management,<br />

international finance, monetary economics, credit crisis, international transmission,<br />

contagion, financial regulation.<br />

* We thank participants at seminars at CORE/University of Louvain, the University of<br />

Luxembourg, and the University of Southern Denmark. This paper can be<br />

downloaded from: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1712707.<br />

** All authors at London Business School, Sussex Place, Regent’s Park, London<br />

NW1 4SA, England, +44-207-000-7000, icooper@london.edu (corresponding<br />

author), rbrealey@london.edu, ekaplanis@london.edu.<br />



1. Introduction<br />

The recent financial crisis has provided a natural experiment with which to test<br />

hypotheses about crisis origination, propagation, and incidence. This has led to a<br />

vigorous debate about the factors which caused some banks and countries to suffer<br />

more than others. As well as having implications for modelling crises, this debate has<br />

important practical implications for the design of regulation and economic<br />

management. Some studies even appear to show that features of banks and economies<br />

which are favorable in normal times led to worse outcomes during the crisis. Hence it<br />

is important to understand as completely as possible the mechanism which caused<br />

these relationships.<br />

There have been two broad approaches to examining the impact of the crisis: using<br />

country aggregates and using individual banks. At an aggregate level Rose and<br />

Spiegel (2009, 2010) fail to find a relationship between cross-country variation in the<br />

impact of the crisis and variables measuring cross-country trade and financial<br />

linkages. Frankel and Saravelos (2010), using more measures of crisis incidence and a<br />

different measurement period, obtain somewhat stronger results. In line with previous<br />

crises, they find that the level of central bank reserves and real exchange rate<br />

overvaluation are significant indicators of the cross-country impact of the crisis. In<br />

addition, lower past credit growth, larger current accounts and savings rates, and<br />

lower external and short-term debt were associated with lower crisis incidence,<br />

although these results are not robust across different crisis incidence measures and<br />

specifications. Lane and Milesi-Ferretti (2010) focus on the impact of the crisis on<br />

GDP and find that growth during the crisis was lower in countries with high GDP per<br />

capita, high pre-crisis growth, and larger current account deficits.<br />

At an individual bank level Acharya, Pedersen, Philippon, and Richardson (2010)<br />

develop an approach based on pre-crisis stock price data. They propose a measure<br />

equal to the return on a bank stock in the worst 5% of weeks for the index return<br />

during the pre-crisis period. They combine this with a measure of leverage to give an<br />

indicator which explains a significant amount of the share price impact of the crisis on<br />

different US financial institutions. In contrast Beltratti and Stulz (2009) focus on<br />

structural variables which measure characteristics of the banks and of regulatory and<br />

governance regimes, rather than share price variables. They examine the factors which<br />

explain the differential share price returns during the crisis for large banks from a<br />

number of countries, including the United States. They conclude that “Overall, our<br />

evidence shows that bank governance, regulation, and balance sheets before the crisis<br />

are all helpful in understanding bank performance during the crisis.”<br />

Other studies have examined specifically the role of multi-national banks in the<br />

transmission of the recent banking crisis. Some of these (Popov and Udell (2010),<br />

Navaretti et al. (2010), and Allen, Hryckiewicz, and Kowalewski (2010)) have looked<br />

at the general issue of whether foreign-owned banks serve as a stabilising influence<br />

and what causes them to adjust their activity in the host country. They show that the<br />

activities of the bank in the host country are affected by characteristics of the parent<br />

bank, such as its fragility, its losses on financial assets, and its reliance on interbank<br />

borrowing.<br />



The above studies focus on the recent crisis. There is in addition a more general<br />

literature on the transmission of banking crises both domestically and cross-border.<br />

Of most direct relevance here is the body of work that views transmission as a<br />

consequence of linkages in financial institutions or investor portfolios. 1 For example,<br />

Allen and Gale (2000) show how financial crises can spread as a result of the impact<br />

on the interbank market of changing demands for liquidity. In this case, the degree to<br />

which particular regions are affected by a crisis in one region depends on the<br />

particular structure of the linkages between regions. Liquidity shocks can also work<br />

directly through financial markets if an increase in demand for liquidity obliges<br />

investors to reduce their exposure in a number of markets (e.g., Calvo (2005) and<br />

Yuan (2005)).<br />

Such linkages imply that the extent to which a shock is transmitted to another region<br />

depends on the structure of the assets and liabilities of financial institutions or their<br />

shareholders. Moreover, the resulting financial contagion is characterized by shifts in<br />

the degree of comovement between bank values, so that the severity and pattern of a<br />

crisis cannot be predicted simply from the comovement in non-crisis periods. 2<br />

However, the role of international linkages in the recent crisis is unclear. Lane and<br />

Milesi-Ferretti (2010) find that measures of international linkages such as trade<br />

openness have little explanatory power with respect to differential crisis impact. Rose<br />

and Spiegel (2010) reach the counterintuitive conclusion that “if anything, countries<br />

seem to have benefited slightly from American exposure.”<br />

In this study we contribute to this literature in three main ways. First, we use a<br />

combination of share price and structural variables to explain the impact of the crisis<br />

for non-US countries and banks. We find that a stock market measure of international<br />

linkages, the pre-crisis correlation of a foreign bank’s share return with the US bank<br />

share return index, explains a significant amount of cross-country and cross-bank<br />

differences in crisis impact. In common with contagion studies, we also find that the<br />

relationship between share returns changed during the crisis and that the exposure of<br />

banks to the crisis is related to structural variables. The variables we find to be<br />

important measure the leverage, liability structure, international holdings, and size of<br />

a country’s banking system. Hence both stock market and structural variables need to<br />

be combined to give a more complete specification of the relationships that caused<br />

differential crisis impacts. Omitting either could lead to misidentification of the<br />

causes.<br />

Second, we test our hypotheses using data for both individual banks and country<br />

indexes. The framework we use is linear in the characteristics of individual banks, and<br />

therefore gives an aggregate measure of systemic risk that is consistent with the<br />

measures for individual banks. Hence it could be used to measure the contribution of<br />

individual banks to country-wide systemic risk. We find that the results for our main<br />

predictive variables are similar for both individual-bank and country-index data.<br />

However, we find that results for some other variables are sensitive to data<br />

availability and sample selection.<br />

1 For a review of the literature on the transmission of financial crises, see Allen and Gale (2007).<br />

2 There is a large empirical literature on the issue of whether correlations between markets increase<br />

during crisis periods. See, for example, Bennett and Kelleher (1988), King and Wadhwani (1990), Wolf (2000), Forbes and Rigobon (2002), and Corsetti et al. (2002).<br />



Third, we test a number of hypotheses that have been supported by other studies. For<br />

example, it has been suggested that high exposure to the crisis was associated with:<br />

1. aggressive use of the Basel rules (IMF (2008));<br />

2. high pre-crisis levels of GDP growth (Lane and Milesi-Ferretti (2010));<br />

3. high pre-crisis share returns and good governance (Beltratti and Stulz (2009));<br />

4. economic development (Lane and Milesi-Ferretti (2010));<br />

5. low international linkages (Lane and Milesi-Ferretti (2010), Rose and Spiegel<br />

(2010)).<br />

All these have important implications for policy but are in several cases<br />

counterintuitive. The different empirical approach and different sample of our study<br />

enables us to test the robustness of these propositions.<br />

Our approach is most closely related to Beltratti and Stulz (2009) but it differs from<br />

theirs in several important respects. Our framework uses the knowledge that the crisis<br />

was transmitted from the US to other countries, so we include a stock market measure<br />

of the linkage between banks and the US banking sector, which we find to be highly<br />

significant. Since our focus is on the performance of non-US banks, we collect a<br />

much broader sample of these banks and use both individual-bank and country-<br />

aggregate data. Our country sample consists of 50 non-US countries and our sample<br />

of individual banks includes 381 non-US firms. The combination of individual-bank<br />

and country data allows us to test whether the individual bank results are robust at the<br />

aggregate level.<br />

Our study is related to Acharya et al (2010), in that we use a measure based on<br />

comovement between share prices as a primary variable. However, our interest is<br />

international propagation rather than domestic US impact, and we use correlation<br />

rather than their measure of extreme comovement. We examine which measure<br />

performs better in the international context of our study and find that the correlation is<br />

significantly better. We also combine the stock price measure with structural<br />

variables, and find that these improve the cross-sectional prediction relative to the<br />

stock market variable used alone.<br />

2. Measuring the international propagation of the crisis<br />

A full test of Allen and Gale (2000) or similar models would require detailed<br />

information on the complex linkages between banks. Our interest is to develop a<br />

parsimonious representation in which the incidence of the crisis can be explained by a<br />

small number of observable variables. We derive our empirical approach from<br />

knowledge of the mechanism that led to the development of the crisis. This concerns<br />

the banking sector of a country and its links to the US banking sector. We do this<br />

because:<br />

1. The crisis was primarily a banking-sector crisis which then spread to the<br />

remainder of the economy. So we look for transmission via the banking<br />

sectors of different countries.<br />

2. The crisis originated in the US, so its propagation should depend on<br />

linkages with the US.<br />



We consider a crisis emanating from country O concentrated in industry K. In this<br />

paper the country of origination is the US and the industry is the banking industry. We<br />

assume that the propagation of the crisis takes place via links between firms that are<br />

members of industry K in different countries. Initially, we describe these links by the<br />

relationship that exists in normal times between industry K in country j and industry<br />

K in country O. We measure this by the regression of equity index returns of industry<br />

K in country j on the industry stock index return in country O:<br />

R_t^j = a^j + b^j R_t^O + e_t^j (1)<br />

Where: j = 1,...,J, e_t^j ∼ N(0, (σ^j)^2), R_t^O ∼ N(0, (σ^O)^2), R_t^j is the return on the stock index of<br />

industry K in country j in period t, and R_t^O is the return on the stock index of industry<br />

K in country O in period t. The key parameter in this regression is b^j, which<br />

measures the responsiveness of industry K in country j to industry K in the country of<br />

origin of the crisis. In normal times we have the standard expression for b^j:<br />

b^j = ρ^j σ^j / σ^O (2)<br />

where ρ^j is the correlation between R_t^j and R_t^O.<br />

We assume that a crisis occurs in the period (T, T + τ). We model the propagation of<br />

the crisis by the relationship between the total return on the shares of industry K in<br />

different countries during this crisis period. If the crisis period were simply a scaled-up<br />

version of a normal period, the cross-sectional relationship between returns would<br />

be the one resulting from equation (1) with a constant term substituted for R_t^O:<br />

R_C^j = a^j + b^j R_C + e^j (3)<br />

Where: R_C^j is the return on industry K in country j during the crisis period (T, T + τ),<br />

R_C = R_C^O is the return on industry K in country O during the crisis period, and<br />

e^j ∼ N(0, (σ^j)^2 τ).<br />

Equation (3) describes the cross-sectional relationship we would expect to hold in a<br />

non-crisis period. It is heteroskedastic, so we scale by the standard error of e^j to give:<br />

r_C^j = ψ^j + θ ρ^j + u^j (4)<br />

Where: r_C^j = R_C^j / σ^j, θ = R_C / σ^O, u^j ∼ N(0, τ). Equation (4) says that in non-crisis<br />

times the normalised return in country j, conditional on the return in country O, is<br />

proportional to its correlation with country O.<br />
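The chain from equation (1) to equation (4) can be checked numerically. The sketch below is an illustration with hypothetical parameter values, reading σ^j as the standard deviation of country j's industry return (under which (2) is the usual OLS identity): it simulates returns under (1), recovers b^j from (2), and verifies that b^j R_C / σ^j coincides with θ ρ^j as the scaling argument requires.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate equation (1): R_t^j = a^j + b^j R_t^O + e_t^j for one country j.
T = 200_000                    # long sample so the moment estimates are tight
sigma_O, sigma_e = 0.02, 0.015
a_j, b_j = 0.0, 0.8
R_O = sigma_O * rng.standard_normal(T)
R_j = a_j + b_j * R_O + sigma_e * rng.standard_normal(T)

# Equation (2): b^j = rho^j * sigma^j / sigma^O.
rho_j = np.corrcoef(R_j, R_O)[0, 1]
sigma_j = R_j.std()
print(b_j, rho_j * sigma_j / R_O.std())     # the two should nearly agree

# Equation (4): conditional on the crisis return R_C in country O, the
# normalised country return r_C^j = R_C^j / sigma^j is centred on
# theta * rho^j with theta = R_C / sigma^O (taking a^j = 0).
R_C = -0.5                     # hypothetical crisis-period return in country O
theta = R_C / sigma_O
expected_rcj = theta * rho_j
print(expected_rcj, b_j * R_C / sigma_j)    # same quantity, two routes
```

The second check is just the algebra b^j R_C / σ^j = ρ^j R_C / σ^O = θ ρ^j, carried out with estimated rather than true moments.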

In a crisis period, however, the same relationship may not hold. We model the<br />

difference between a crisis period and a non-crisis period by making the parameter θ<br />

depend on other variables which measure the transmission mechanism of the crisis:<br />



θ^j = θ(X^j) (5)<br />

Where X^j is a vector of variables which measure the vulnerability of country j to the<br />

crisis. The model then becomes:<br />

r_C^j = ψ^j + θ(X^j) ρ^j + u^j (6)<br />

If the variables have an additive effect, equation (6) becomes:<br />

r_C^j = ψ_0 + ψ_1 X_1^j + ... + ψ_n X_n^j + ψ_{n+1} ρ^j + u^j (7)<br />

Equation (7) is the specification we use in the study.<br />
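Equation (7) is a linear cross-sectional regression and can be estimated by OLS. A sketch on synthetic data, with all variables, sample sizes, and coefficients hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cross-section of J countries: structural variables X (standing in
# for leverage, liability structure, foreign assets, etc.) and the pre-crisis
# correlation rho with the US banking index.
J, n_vars = 50, 3
X = rng.standard_normal((J, n_vars))
rho = rng.uniform(0.1, 0.9, J)
psi_true = np.array([-0.5, 0.8, -0.3, 0.2, -2.0])  # psi_0, psi_1..psi_n, psi_{n+1}
design = np.column_stack([np.ones(J), X, rho])
r_C = design @ psi_true + 0.1 * rng.standard_normal(J)  # equation (7) + noise

# OLS estimate of equation (7):
# r_C^j = psi_0 + sum_k psi_k X_k^j + psi_{n+1} rho^j + u^j
psi_hat, *_ = np.linalg.lstsq(design, r_C, rcond=None)
print(np.round(psi_hat, 2))
```

With data generated from the model, the estimates recover the hypothetical coefficients up to sampling noise; the coefficient on ρ^j plays the role the paper assigns to the correlation channel.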

3. Data<br />

We test our hypothesis with two data sets. The first consists of aggregate banking<br />

data for a sample of 50 countries. We then conduct a similar series of tests using data<br />

for a sample of nearly 400 individual banks. This provides a substantial increase in<br />

sample size but at the possible cost of more noisy data. The appendix lists the data<br />

definitions and sources.<br />

Banks differ considerably in the presentation of their accounts. Therefore, any<br />

database of balance-sheet variables encounters an inevitable problem of consistency.<br />

This problem is likely to be particularly severe in a cross-country study and adds to<br />

the noise in the data for both our country-level tests and those for individual banks.<br />

3.1 Country-Level Data<br />

For the country-level tests we include all countries for which the following data are<br />

available: (1) Datastream bank industry equity return indices for the period January<br />

2005 to March 2009; (2) IMF International Financial Statistics data covering banking-<br />

sector Total Assets, Foreign Claims, Demand Deposits, and Time Deposits for the end<br />

of 2006; (3) IMF aggregate capital ratios for the banking sector; (4) IMF aggregate<br />

Basel regulatory capital ratios for the banking sector. Our sample includes the 50<br />

countries shown in Table 1. These cover 91% of world GDP excluding the US (using<br />

IMF data for 2009).<br />

We measure the impact of the crisis, R_C^j, as the average weekly return on Datastream<br />

bank industry equity indices in the period 21 May 2007 to 9 March 2009, a period in<br />

which the U.S. index of bank returns fell by 79%. 3 We measure the independent<br />

variables from the period before the crisis. We use 2 years of weekly data from<br />

January 2005 to December 2006 to compute the correlation between a country’s bank stock index and that of the US, ρ^j, and the standard deviation of each country’s bank stock index return in non-crisis times, σ^j. We use the same data to calculate the<br />

3 We also ran both the country and individual bank regressions with the cumulative return May 2007-<br />

March 2009 as the dependent variable. The results were qualitatively similar.<br />



extreme value measure suggested by Acharya et al. We leave a gap between this<br />

period of data measurement and the crisis period to ensure that our independent<br />

variables would have been known by May 2007 and to allow for uncertainty about the<br />

exact dating of the crisis. To measure banking variables (which are included in the vector of explanatory variables, X^j) we use primarily IMF aggregate data for a<br />

country’s banking sector at the end of 2006. 4 We use the data for “Other depository<br />

corporations”, which are largely banks. This and other definitions, together with data<br />

sources for the country-level data are given in the appendix.<br />
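The pre-crisis measurement step described above reduces to computing, for each country, one correlation with the US series and one standard deviation over the two-year window. A sketch with hypothetical weekly return data standing in for the Datastream indices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical weekly bank-index returns, Jan 2005 - Dec 2006 (~104 weeks):
# column 0 stands in for the US index, columns 1..J for other countries.
weeks, J = 104, 5
returns = rng.standard_normal((weeks, J + 1)) * 0.02
us = returns[:, 0]

# Pre-crisis inputs to the cross-sectional model: rho^j is the correlation of
# country j's bank index with the US index, sigma^j its weekly standard deviation.
rho = np.array([np.corrcoef(returns[:, j], us)[0, 1] for j in range(1, J + 1)])
sigma = returns[:, 1:].std(axis=0, ddof=1)
print(rho.shape, sigma.shape)    # (5,) (5,)
```

These per-country ρ^j and σ^j are then carried forward as regressors and scale factors in the crisis-period regression.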

The banks included in the Datastream Indices all have publicly listed stock, whereas<br />

the IMF data include government-owned banks and cooperatives. If publicly owned<br />

banks have different characteristics, this may affect our results. The sample size in<br />

such a study is naturally limited. However, the excluded countries are generally tiny<br />

and it is not clear that they would add much extra information to the study.<br />

Given the limited number of observations, we take care to use only a few independent<br />

variables and not to engage in data-mining. We select those variables that could play a<br />

significant role in the transmission mechanism of the crisis from the US banking<br />

sector to the banking sector of country j. Our hypothesised variables are:<br />

• Leverage of the banking sector, which we measure by a capital ratio;<br />

• Fragility of the banking sector, which we measure by the proportion of bank<br />

liabilities that are not deposits. We hypothesise that these represent less<br />

permanent sources of funding;<br />

• Fragility as measured also by the use of derivatives;<br />

• International linkages not captured by the correlation with the US, which we<br />

measure by the proportion of foreign assets held by banks;<br />

• The importance of the banking sector in the economy, which we measure by<br />

the ratio of bank assets to GDP.<br />

We augment these variables with four supplementary measures that have been<br />

suggested in other studies as associated with crisis returns. These include two<br />

governance measures, the prior return on bank stocks, and the prior growth in GDP.<br />

Previous empirical studies suggest that these measures were negatively related to bank<br />

performance during the crisis.<br />

The variable that presents the most difficulty in measurement is the fragility of the<br />

banking system. We estimate this in two ways. The first is the amount of less<br />

permanent sources of funding. We hypothesize that time deposits and, to a lesser<br />

extent, demand deposits are likely to represent more permanent sources of funding<br />

and involve fewer linkages to other banks. Other liabilities are likely to represent<br />

sources of funding that are more susceptible to early flight risk as a banking crisis<br />

emerges. Our second measure of fragility is the use of derivatives. We hypothesize<br />

that large derivatives positions are likely to make the banking system more fragile,<br />

both because of the implicit leverage that derivatives contain and because derivatives<br />

may serve to transmit risk internationally to other banks.<br />

We were not able to locate a reliable country-level measure of the ratio of bank equity<br />

to total assets. For our country study we therefore use two other measures of leverage<br />

4 Banking data for Taiwan are taken from the Central Bank’s Website.<br />



– (1) the ratio of capital to total assets, where capital includes both equity and<br />

subordinated long-term debt, and (2) the Basel ratio of Tier 1 + 2 capital to risk-weighted<br />

assets. Both measures were taken from the IMF Global Financial Stability<br />

Report.<br />

We include a measure of the size of the banking sector relative to GDP. A large<br />

banking sector is more likely to have international linkages. In addition, it may cause<br />

other problems in the economy as the crisis evolves. In this case the crisis could be<br />

accentuated by feedback between the banking sector and the other parts of the<br />

economy.<br />

An OECD report argues that “the financial crisis can be to an important extent<br />

attributed to failures and weaknesses in corporate governance arrangements”<br />

(Kirkpatrick (2008)). Beltratti and Stulz (2009) test this assertion but, in contrast to<br />

the OECD, they find that banks with more shareholder-friendly boards performed<br />

worse during the crisis. We check whether this is also true for our sample. We use<br />

two country-level measures of corporate governance, both described in Djankov et al<br />

(2008) and reported in detail in Djankov et al (2005). The first is their anti-self-dealing<br />

index, which measures the degree of protection that each country provides<br />

against a specified tunnelling transaction. The second measure is their revised index<br />

of anti-director rights, which updates and extends La Porta et al. (1998). Both<br />

variables are available for 45 of our countries.<br />

Finally, we examine two other measures that previous studies have found to be<br />

associated with crisis returns. The first is the return on the country’s bank stocks<br />

during 2006 and the second is the average rate of GDP growth in the five years to<br />

2006.<br />

In our regressions we transform those balance-sheet ratios that we expect to have a<br />

positive association with returns by subtracting them from 1.0. Therefore, the<br />

predicted sign on the coefficient is negative for each of these independent variables.<br />
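To make this sign convention concrete, here is a minimal numpy sketch; the ratio values and variable names are invented for illustration and are not taken from the paper's data:<br />

```python
import numpy as np

# Hypothetical balance-sheet ratios (illustrative values only),
# each already scaled to lie in [0, 1].
capital_ratio = np.array([0.06, 0.10, 0.04])   # higher => better capitalised
deposit_ratio = np.array([0.70, 0.55, 0.80])   # higher => more stable funding

# Ratios expected to be positively associated with crisis returns are
# subtracted from 1.0, so that every regressor enters as a "risk" measure
# and the predicted coefficient on each is negative.
leverage = 1.0 - capital_ratio
non_deposit_funding = 1.0 - deposit_ratio
```

With this transformation a better-capitalised banking sector (higher capital ratio) has lower measured "leverage", so a negative coefficient corresponds to worse crisis returns for riskier sectors.<br />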

Table 1 provides some summary data for the dependent variable. Of the 50 countries<br />

only China’s banking sector experienced a positive return from May 2007 to March<br />

2009. There are, however, some regional patterns in the data. European banks<br />

experienced unusually sharp falls in value, with Ireland, the most affected country,<br />

experiencing a mean weekly return of -2.6%. Emerging and developing economies<br />

generally fared better with a mean raw return of -0.6% per week, compared with a<br />

mean return of -1.2% for advanced economies. 5 There is no relationship between the<br />

severity of the declines in prices and the prior variability of returns; some of the<br />

apparently most stable banking markets suffered the sharpest falls in value.<br />

The distribution of the variables and their correlations are given in Tables 2 and 3.<br />

The simple correlations between the standardized return and our main independent<br />

variables (column 1) all have the predicted negative sign. The correlation between the<br />

standardized return and each of the governance variables is close to zero. However,<br />

the standardized return is quite strongly positively correlated with the prior growth in<br />

GDP.<br />

5 We use the definitions in the IMF’s World Economic Outlook.<br />



3.2 Individual Bank Data<br />

Our sample of individual banks consists of the components of the Datastream World<br />

Bank Index in 2010. Since this list is subject to potential survivorship bias, we<br />

supplemented it by merging it with the 200 largest banks by total assets in 2006,<br />

based on The Banker’s 2007 listing of the top 1000 banks at the end of the previous<br />

year. We exclude those banks whose stocks were first listed after the start of 2005.<br />

The remaining sample includes companies that are not principally commercial banks.<br />

For example, some are bancassurance companies, investment banks, or asset<br />

managers. We exclude three cases where a bank also acts as the central bank, but<br />

otherwise do not attempt to make what would be inevitably judgmental exclusions.<br />

The result is a sample of 381 banks from 50 countries. 6 In contrast to our country-level<br />

data, the sample does not include banks from Bulgaria, or Slovenia, but does<br />

include banks from Bahrain and Iceland.<br />
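The merging step can be sketched in pandas as follows; the bank identifiers and listing years are invented, so this illustrates the procedure rather than reproducing the actual sample construction:<br />

```python
import pandas as pd

# Invented identifiers standing in for the two source lists.
datastream = pd.DataFrame({"bank": ["A", "B", "C"], "first_listed": [2001, 1998, 2006]})
top200_2006 = pd.DataFrame({"bank": ["B", "D"], "first_listed": [1998, 1995]})

# Union of the two lists, dropping banks that appear in both,
# to limit survivorship bias in the Datastream constituents.
sample = pd.concat([datastream, top200_2006]).drop_duplicates("bank")

# Exclude banks whose stocks were first listed after the start of 2005.
sample = sample[sample["first_listed"] < 2005].reset_index(drop=True)
```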

The returns and balance-sheet data for individual banks are taken from Datastream,<br />

and are supplemented by data from Osiris and the banks’ annual reports. The<br />

dependent variable is the average weekly return on the bank stock from May 2007 to<br />

March 2009. We normalize this return by dividing by the standard deviation of<br />

returns in the period 2005-2006. Where a bank was acquired for stock we include the<br />

subsequent return on the stock of the acquiring company. Where a bank was<br />

nationalized or acquired for cash, we include the cash payment and assume a zero<br />

return in the subsequent weeks. 7<br />

Definitions of the independent variables are shown in the Appendix. The first group<br />

of variables in the table largely parallel those used in the country-level regressions.<br />

For individual banks we consider three measures of leverage -- the ratio of equity to<br />

total assets, the ratio of total capital to total assets, and the Basel Tier 1 + 2 capital<br />

adequacy ratio. Where available, we collect separate measures of demand and time<br />

deposits. However, many banks report only total deposits and therefore we use this<br />

measure as an alternative independent variable.<br />

The amount of foreign loans is available for only a small proportion of our sample.<br />

Therefore, as in the country-level regressions, we use the country-average ratio of<br />

foreign loans to assets. We also use the same measure of the ratio of banking assets to<br />

GDP that we use in the country-level regressions.<br />

In addition to our principal independent variables, we also examine four other balance<br />

sheet variables that may provide information about the bank’s exposure to the crisis.<br />

The first is the ratio of the bank’s short-term debt to total assets. Thus instead of<br />

measuring fragility by the proportion of bank funding that is not provided by deposits,<br />

we use the proportion that is funded by short-term debt. We predict that a bank that<br />

relies on short-term wholesale funding will be more susceptible to the crisis. 8<br />

6 Of these banks 360 were members of the Datastream indices and 21 were added from The Banker.<br />

7 This assumption is largely immaterial. Payments for banks that were rescued by acquisition or<br />

nationalization were generally either zero or very small.<br />

8 Osiris provides a measure of money-market funding. This is available for a smaller sample and does<br />

not offer any improvement over the Datastream measure.<br />



Second, real-estate loans have been a common source of banking crises (Herring and<br />

Wachter (1999) and Reinhart and Rogoff (2008)) and more specifically played a<br />

leading role in the 2007 crisis. To the extent that banks were exposed to the US real-<br />

estate market or that there was contagion across countries in real-estate markets, we<br />

expect banks with a high real-estate exposure to be more sensitive to the crisis in the<br />

US. Datastream provides data on the level of mortgage loans for a substantial<br />

subsample of our banks, but data on holdings of mortgage-backed securities are<br />

available for just a small subsample. We therefore look only at the ratio of mortgage<br />

loans to total assets.<br />

Finally, a bank’s involvement in the interbank market may serve as a proxy for its<br />

international linkages. We therefore also collect data on interbank loans (an asset),<br />

and loans due to other banks (a liability).<br />

We again augment these measures with three supplementary variables that previous<br />

studies suggest may be associated with bank performance during the crisis. These are<br />

a measure of corporate governance, for which we use the Corporate Governance<br />

Quotient (CGQ®), the average weekly return on the bank stock in 2006, and the<br />

growth in country GDP over the five years ending in 2006.<br />

As in the case of the country data, we restate the independent balance-sheet variables,<br />

so that the predicted coefficient on each is negative. Tables 4 and 5 summarize the<br />

distributions of the variables used in the individual bank regressions and their<br />

pairwise correlations. The pervasive nature of the crisis is illustrated by the fact that<br />

the mean return was negative for over 90% of the observations, with a mean weekly<br />

return of -0.7%. Again there was a substantial difference between the performance of<br />

banks in emerging and developing economies and those in advanced economies. In<br />

the former case the mean weekly raw return was -0.4% and in the latter case it was<br />

-0.9%.<br />

The simple correlations between the standardized return and the main independent<br />

variables all have the predicted negative sign. Both the governance variable and the<br />

prior stock return are weakly negatively correlated with the standardized crisis return,<br />

while the growth in GDP continues to be quite strongly positively correlated. In the<br />

case of the independent variables, there is a high correlation between the three<br />

measures of bank capital and, not surprisingly, between total deposits and time<br />

deposits. There is a strong negative correlation between short-term debt and total<br />

deposits.<br />

The correlation between returns on the bank and the US banking index seeks to pick<br />

up linkages between banks that are not captured by other independent variables such<br />

as the relative amount of interbank activity or the size of the banking sector. The<br />

weak association between these balance-sheet variables and the return correlation<br />

suggests that the former may not be adequate proxies of bank linkages.<br />

4. The role of correlation and leverage<br />

4.1 Can pre-crisis correlation explain the propagation of the crisis?<br />



Equation (4) says that the prior correlation with the US banking sector should help to<br />

predict the impact of the crisis. Therefore, we first test whether the relationships that<br />

hold in normal times can explain the cross-sectional impact of the crisis. The first<br />

column of Table 6 shows the regression of the standardized crisis return variable on<br />

the pre-crisis correlation with the US banking sector. Panel A is for the country<br />

sample, Panel B for the full sample of individual banks, and Panel C for the restricted<br />

sample of individual banks for which we have all three measures of leverage. The<br />

adjusted R²’s are .27, .23, and .21, suggesting that the impact of the US credit crisis<br />

on other countries was related to the pre-existing correlation between their banking<br />

sectors.<br />

The fact that this is an incomplete explanation is not surprising. Even in a domestic<br />

context in normal times this regression would not explain a large part of the cross-sectional<br />

dispersion of returns. 9 Moreover, if contagion depends on the particular<br />

structure of interbank linkages, then the pattern of comovement during a crisis period<br />

may differ from that in the pre-crisis period. We, therefore, tested whether the<br />

correlations with the US bank index are stable between the two periods. In the case of<br />

9 countries, or 18% of the sample, we can reject at the 5% level the hypothesis of no<br />

change in the underlying correlations.<br />
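A standard way to test the equality of two sample correlations is Fisher's z-transformation; the following sketch assumes a test of this general form (the correlation values and sample sizes are invented):<br />

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided z-test of H0: the two sample correlations share one population value."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transforms
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    # Two-sided p-value from the standard normal distribution.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Invented example: correlation 0.30 over 104 pre-crisis weeks
# versus 0.75 over 96 crisis weeks.
z_stat, p_value = fisher_z_test(0.30, 104, 0.75, 96)
```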

Figure 1 plots the normalized return against the correlation for the country sample. In<br />

the raw data the average return on the bank indexes for emerging markets was higher<br />

than for advanced economies. This difference is reduced, but not eliminated, when<br />

we allow for the differing correlations with the US index. However, we show later<br />

that it almost entirely disappears once we allow for differences in the structure of the<br />

countries’ banking systems.<br />

4.3 Bank capital and crisis propagation<br />

Many regulatory recommendations focus on the role of bank capital in preventing<br />

banking crises. Therefore, we examine the ability of bank capital measures to explain<br />

the cross-country impact. We include the bank capital measure in a regression of the<br />

form:<br />

r_j = a + b_1 ρ_j^C + b_2 (1 − capitalmeasure_j) + u_j        (9)<br />

The term (1 − capitalmeasure)<br />

converts the capital measure to a measure of leverage<br />

of the banking system, rather than its soundness.<br />
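Equation (9) can be estimated by ordinary least squares. The following self-contained sketch runs the regression on simulated data; the sample size, parameter values, and data-generating process are all invented for illustration:<br />

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50                                       # one observation per country

corr_us = rng.uniform(0.1, 0.9, n)           # rho_j^C: pre-crisis correlation with the US
capital = rng.uniform(0.04, 0.12, n)         # capital measure for country j

# Simulate standardized crisis returns with the predicted negative loadings.
r = 0.5 - 1.2 * corr_us - 0.8 * (1.0 - capital) + rng.normal(0.0, 0.2, n)

# OLS of r on a constant, rho, and (1 - capital measure), as in equation (9).
X = np.column_stack([np.ones(n), corr_us, 1.0 - capital])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
a_hat, b1_hat, b2_hat = coef
```

Under the paper's hypothesis both b1_hat and b2_hat should come out negative: banks more correlated with the US, and more leveraged banking systems, fare worse in the crisis.<br />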

As in the case of our country regressions we examine both the ratio of book capital<br />

including subordinated and hybrid debt to total assets and the Basel capital adequacy<br />

ratio. In addition, in the case of the individual banks we measure the ratio of book<br />

equity capital (common stock plus preferred) to total assets.<br />

9 The regression is similar to a cross-sectional regression of ex post returns on betas. The independent<br />

variable is measured with error, which can lead to an errors-in-variables problem. This is the reason<br />

that it is common to form portfolios of stocks before using betas in tests of other cross-sectional<br />

relationships. In the case of the country-level regressions our correlation measure is based on a<br />

portfolio rather than individual stocks, so our test should suffer less from errors-in-variables bias than<br />

tests that use individual stocks.<br />



The Basel measure of leverage uses risk-weighted assets as the denominator and<br />

therefore seeks to take account of the relative riskiness of each bank's assets; in<br />

principle it should be a more accurate measure of the banks' ability to withstand<br />

shocks.<br />

Table 6 shows the results of regressing the standardized returns on both the<br />

correlation with the index and the capital measure for three samples: countries,<br />

individual banks, and those individual banks for which we have all three leverage<br />

measures. The last of these provides a horse race between the three measures using an<br />

identical sample.<br />

All the leverage variables in Table 6 have the expected sign. The horse race in Panel<br />

6C suggests that in the case of the individual banks, the equity ratio is somewhat more<br />

informative, and, given the much larger sample size provided by this measure, we<br />

employ it in subsequent regressions.<br />

Panel 6C also shows, using a common sample, that the complex risk-weighting in the<br />

asset calculation of the Basel II measure does not offer any improvement over the<br />

other ratios in explaining the effect of the crisis on bank returns. This result is robust<br />

in tests that include other variables in the regression. The finding lends force to<br />

criticisms of the ability of the Basel measure to indicate the exposure of a country’s<br />

banking system. Since the Basel measure must capture some of the risk<br />

characteristics of different assets, its poor performance in explaining crisis exposure<br />

suggests that banks have been adept at taking advantage of its loopholes, making it<br />

less informative as a measure of crisis risk than a simple equity ratio.<br />

A more extreme hypothesis is advanced in IMF (2008). 10 It holds that<br />

banks which made aggressive use of the Basel rules were punished by the capital<br />

markets in the crisis. The final column in Panels A and B tests this hypothesis. It<br />

includes the difference between the Basel ratio and the simple leverage measure, as<br />

well as the leverage measure itself. If banks were punished for making aggressive use<br />

of the Basel rules, we expect this coefficient to be significantly negative, but it is not.<br />

Thus we conclude that the Basel risk adjustments were not informative regarding<br />

crisis risk, rather than that banks were actively punished for taking advantage of these<br />

rules.<br />

5. Country-level results<br />

In this section we describe our country-level results with the expanded regression, and<br />

then we look in the next section at how well these results are confirmed by the sample<br />

of individual banks.<br />

5.1 Banking sector fragility, balance sheet measures of international linkages,<br />

and the importance of banking in the economy<br />

We test whether adding other measures to the country-level regression improves our<br />

ability to understand the international impact of the crisis. We include the fragility of<br />

the banking system, measured by the proportions of bank liabilities financed with<br />

10 See the discussion of Figure 1.17.<br />



demand deposits and time deposits. Although the correlation measure explains part of<br />

the international propagation of the crisis, it is possible that it does not capture all<br />

dimensions of the international exposure of a country’s banking sector. So we also<br />

include the ratio of total bank assets to GDP, as a measure of the importance of<br />

banking in the economy. Finally, we include the proportion of foreign assets in the<br />

aggregate balance sheet. Table 7 shows the results of including these variables in an<br />

additive regression using the capital ratio as the measure of leverage. All the<br />

variables, except ASSETS/GDP, are scaled to be in the range [0,1] and all are<br />

constructed to have an expected coefficient less than zero.<br />

In the first two columns of Table 7 all the coefficients are negative as predicted, and<br />

the equation with all the added variables (column 2) explains 57% of the cross-sectional<br />

variation in the normalised returns. The most significant variables are the<br />

prior correlation with the US, the capital variable, the level of time deposits, and the<br />

relative size of the banking sector. By contrast, the demand deposit variable and the<br />

proportion of foreign assets play little role. With the inclusion of the additional<br />

explanatory variables the regional patterns in the residuals largely disappear. 11 Thus<br />

the difference between the performance of banks in advanced and emerging<br />

economies can be almost entirely explained by simple measures of their liability<br />

structure and their importance in the economy.<br />

The second column in Table 7 incorporates all those variables that we hypothesised<br />

would be related to subsequent returns. For ease of reference we call it our principal<br />

country-level regression.<br />

The prior correlation with US bank stocks serves as a proxy for bank linkages that are<br />

not captured by our other independent variables. In the third column of Table 7 we<br />

omit this variable to see whether the remaining independent variables are able to take<br />

up the slack. The removal of the correlation variable provides little help for our other<br />

measures of bank linkages, namely the relative size of the banking sector and the level<br />

of foreign claims. The adjusted R² of this reduced regression is lower, but remains<br />

respectable at .51.<br />

5.2 The role of derivatives<br />

The data available to measure the derivatives’ usage by different countries’ banking<br />

sectors are of lower quality than the data for our other variables. BIS data on<br />

aggregate derivatives usage measure only the market value of positions with positive<br />

value for each of the countries in our sample. This could greatly underestimate the<br />

exposure arising from derivative positions, which is likely to depend more on the<br />

gross face value of positions both long and short, rather than on the net market value<br />

of long positions.<br />

We use instead data that measure the total amount of counterparty derivative<br />

exposure, as defined by BIS, for the combined long positions held by banks in 24<br />

11 The average residual standardized return is -.03 for advanced economies, and +.03 for emerging and<br />

developing economies. The equivalent values for the standardized returns themselves were -.54 and<br />

-.16.<br />



major countries. The BIS reports how much of these aggregate derivative positions<br />

are held against counterparties in each country. 12 We use these total amounts and<br />

deflate them by the aggregate bank assets for each country.<br />

The fourth column of Table 7 shows the result when the derivatives variable is<br />

included. Although the variable has the correct sign in the bivariate regression, it no<br />

longer does so when the other variables are included. The coefficient is insignificant<br />

2<br />

and the R is unchanged. Hence with the available data we are unable to find an<br />

influence of derivative usage on the impact of the crisis. It may well be, however, that<br />

such a relationship does exist and is masked by the poor quality of the available data<br />

for this purpose.<br />

5.3 Other Variables<br />

The final two columns of Table 7 introduce the four additional variables that have<br />

been suggested as associated with differential bank returns. The simple correlations<br />

reported in Table 3 suggested that prior GDP growth was significantly and positively<br />

associated with subsequent crisis returns. In the multiple regression this relationship<br />

almost entirely disappears. Thus, once we allow for country differences in<br />

correlations with US bank returns and balance-sheet structure, GDP growth and prior<br />

stock returns have little to contribute.<br />

In Table 7 the coefficients on the two governance variables have the opposite sign and<br />

neither is significant. 13 Again, it remains possible that returns are truly related to<br />

specific characteristics of corporate governance that are not picked up by our general<br />

indexes. However, given the lack of any strong priors as to which characteristics<br />

could matter, any exploration of this possibility would involve a substantial risk of<br />

data mining.<br />

6. Individual bank results<br />

We now examine whether similar relationships hold at the level of individual banks.<br />

The main results are contained in Table 8. 14<br />

6.1 Measures of the comovement between individual bank returns and the US<br />

bank index<br />

The first column of Table 8 shows the relationship between the standardized return<br />

during the crisis and the prior correlation with the US banking index. Despite the<br />

significant measurement error in the independent variable, the coefficient is highly<br />

significant and the variable explains almost a quarter of the variance in subsequent<br />

returns. The ability of past correlation estimates to explain subsequent returns may be<br />

reduced by possible instability in the correlations between the pre-crisis and crisis<br />

periods. Therefore, we again test the hypothesis of no change in the underlying<br />

12 This is reported in Table 9B of BIS, International Financial Statistics.<br />

13 Since the two governance measures are quite highly correlated, we also added them separately into<br />

the regression. The coefficients continued to have different signs and to be insignificant.<br />

14 Given the varying sample sizes, one should be cautious when drawing comparisons between different<br />

columns in Table 8.<br />



correlations and reject the hypothesis for 16% of our sample at the 5% significance<br />

level.<br />

6.2 Regression of returns on the correlations and balance-sheet variables<br />

The remaining columns of Table 8 summarize the results from progressively<br />

introducing the balance-sheet variables. Since some items of data are available for<br />

only a subset of banks, the samples vary between regressions. To facilitate<br />

comparison between regressions, the final row in the table shows the R² when<br />

regression (3) is rerun using the same subset of data.<br />

The second column of the table includes the ratio of equity to assets, the two deposit<br />

variables and the two country-level variables. All the coefficients have the predicted<br />

sign and all except the foreign claims measure are significant at the 5% level or better.<br />

The coefficients on the two deposit variables are broadly similar, and this suggests<br />

that we can usefully increase the sample size by replacing them with total deposits.<br />

Regression (3) shows that, when we do this, the R² increases to .46. All the<br />

coefficients continue to have the predicted sign and all except foreign claims are<br />

significant at the 1% level. Regression (3) corresponds most closely to our principal<br />

country-level regression. For ease of reference, we term it the principal individual-bank<br />

regression.<br />

As in the case of the country analysis, the standardized returns are considerably higher<br />

for banks in emerging countries. However, almost all this difference is explained by<br />

our independent variables. The relatively strong performance of emerging-country<br />

banks is the result of their relative independence from the US banking market and<br />

their more robust financial structure. 15<br />

In line with our country regressions, we test whether omission of the correlation<br />

variable results in more emphasis being placed on the remaining independent<br />

variables. Column (4) of Table 8 shows that, with the exception of the foreign claims<br />

variable, the coefficients are somewhat larger in magnitude and more significant.<br />

However, these effects are relatively modest. The adjusted R² of this reduced<br />

regression is .40.<br />

The simple correlation between standardized return and the short-term debt ratio was<br />

strongly negative, suggesting that this variable should add to the explanatory power of<br />

the equation. However, for most banks deposits and short-term debt together<br />

constitute a high fraction of a bank’s funding. Including both variables would come<br />

close to over-identifying the regression. Therefore, in column (5) of Table 8 we<br />

substitute short-term debt for total deposits. The coefficient on short-term debt is<br />

negative as predicted and strongly significant, but the coefficient on the equity ratio is<br />

no longer significant and the R² is reduced.<br />

15 The average residual standardized return is -.01 for advanced economies, and +.02 for emerging and<br />

developing economies. The equivalent values for the standardized returns themselves were -.28 and<br />

-.11.<br />




The last three columns of Table 8 repeat our principal regression with the addition of<br />

the remaining balance-sheet variables and the three measures suggested by previous<br />

empirical research. Missing data are a problem in these regressions and therefore we<br />

introduce the variables in a way that preserves the sample size as far as possible. Regression (6)<br />

includes the two interbank measures, the prior stock return, and the growth in GDP.<br />

Only the last is significant, but it is quite highly correlated with the measure of bank<br />

claims as a proportion of GDP, which now ceases to be significant.<br />

Regression (7) omits the two interbank variables, but includes instead the level of<br />

mortgages. The coefficient on mortgage loans is significant but, contrary to<br />

predictions, positive.<br />

The final column of Table 8 shows the results of adding to our principal regression the<br />

Corporate Governance Quotient (CGQ®). The coefficients on the correlation with<br />

the US banking index and on the balance-sheet variables remain negative and for the<br />

most part significantly so. In contrast to the Beltratti and Stulz (2009) study, the<br />

coefficient on the governance variable is positive though not significant. The R² is<br />

increased to .55. However, this is simply due to the changed sample. The final row<br />

shows that an almost identical R² is obtained when our principal regression is re-run<br />

using the same sample.<br />

7. A Measure of Extreme Comovement<br />

If the comovement between banks differs during periods of turbulence, then simple<br />

measures of correlation during normal periods may not be the best predictor of<br />

comovement during the crisis. Acharya, Pedersen, Philippon, and Richardson (2010)<br />

propose a measure equal to the return on a bank stock in the worst 5% of weeks for<br />

the index return during the pre-crisis period. 16 Therefore, we check whether this stock<br />

market measure of exposure to the crisis performs better than the correlation.<br />

Translated to the current context, we measure this as the average standardized bank<br />

return in the 5% of weeks that the US bank stock index gave the worst returns in the<br />

period prior to the crisis (the “bad-weeks return”). Panel A of Table 9 compares the<br />

effect of using this variable instead of the Pearson correlation coefficient in the<br />

country regression. Columns 1 and 2 show the results for a simple regression of the<br />

standardized return on the measures of comovement, Columns 3 and 4 incorporate<br />

additional explanatory variables, while Column 5 includes both measures of<br />

comovement in the one regression. Panel B provides a similar set of comparisons for<br />

individual banks. 17<br />
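A sketch of the bad-weeks calculation on simulated pre-crisis returns; only the 5% cutoff and the normalization follow the text, everything else is invented for illustration:<br />

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = 120                                   # invented pre-crisis sample length

us_index = rng.normal(0.001, 0.02, weeks)                  # US bank index, weekly
bank = 0.6 * us_index + rng.normal(0.0, 0.015, weeks)      # one foreign bank

# The worst 5% of weeks for the US bank index during the pre-crisis period...
cutoff = np.quantile(us_index, 0.05)
bad_weeks = us_index <= cutoff

# ...and the bank's average standardized return over just those weeks.
bad_weeks_return = (bank[bad_weeks] / bank.std(ddof=1)).mean()
```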

Regardless of whether we use country-level or individual-bank data, the simple<br />

regression of standardized return on the bad-weeks return gives a lower adjusted R²,<br />

indicating that this variable captures less information relevant to the international<br />

transmission of the crisis than the simple correlation measure. Columns 3 and 4 in<br />

each panel show that this relatively poor performance carries over when we add other<br />

variables to the analysis, and Column 5 shows that the correlation coefficient<br />

16 For similar studies that have focused on extreme values to measure contagion, see Bae et al. (2003)<br />

and Gropp and Moerman (2003).<br />

17 Note that a high value for the bad-weeks-return variable implies a low correlation with the US index.<br />



continues to have greater explanatory power when both variables are included in the<br />

regression.<br />

The relatively poor performance of the bad-weeks return variable differs from the<br />

result that Acharya et al find in a test of the domestic US impact across different<br />

financial institutions. 18 Their variable is derived from the worst days for the US bank<br />

stock index in the period June 2006 to June 2007. We use weekly rather than daily<br />

data because time-differences between stock exchanges make daily data unreliable in<br />

international studies. In our context, the failure of the bad-weeks variable to predict<br />

the cross-sectional impact of the crisis indicates that the bad weeks that happened<br />

during generally good times did not contain useful information about the behaviour in<br />

a crisis. So there must have been a difference between the international linkages that<br />

operated during those “bad weeks in good times” and those that operated during the<br />

crisis. The success of this variable in a domestic US context compared with its failure<br />

internationally illustrates the potential danger of extrapolating US results to the<br />

international context.<br />

8. Robustness<br />

The coefficients on our principal variables uniformly have the predicted sign and are<br />

generally highly significant. The exception is the measure of the relative importance<br />

of foreign claims, but even in this case the coefficient is consistently negative and for<br />

the most part hovers on the borders of significance. The size of each coefficient<br />

generally varies little with changes in model specification and sample size.<br />

We perform two robustness checks. The first is to test for thin-trading bias and the<br />

second is to check the sensitivity of our model to variations in the period over which<br />

the returns are measured.<br />

8.1 Thin trading<br />

For some of our country indexes thin trading may have biased our estimates of the<br />

prior standard deviation and the correlation with the US bank index. To test for the<br />

possible effect of thin trading, we repeated our principal regressions using bi-weekly<br />

returns to estimate the correlation. In the country regression, the R² improved from<br />

.57 to .60. All the coefficients had the predicted sign and all except the coefficients<br />

on foreign assets and demand deposits remained significant at the 5% level or better.<br />

Thin trading is equally a potential problem in our individual bank sample, where some<br />

of the banks have a large majority shareholder. As a result, the free float is small and<br />

the shares suffer from thin trading. As in the case of the country-level regressions, we<br />

repeated our analysis using bi-weekly returns. The results were little changed. For the<br />

main regression, all the coefficients were negative and all except the coefficient on<br />

foreign assets were significant at the 5% level or better. The R² was reduced from .38<br />

to .34. We also assessed the potential thin-trading bias by omitting the 11 banks<br />

where there were (arbitrarily) 30 or more weekly returns of zero. The results were<br />

almost identical.<br />
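The bi-weekly correlation used in this check can be sketched as follows. The paper does not spell out how the bi-weekly returns were constructed; this sketch assumes non-overlapping two-week compounded returns, and the names are ours.<br />

```python
import numpy as np
import pandas as pd

def biweekly_correlation(bank: pd.Series, index: pd.Series) -> float:
    """Correlation estimated from non-overlapping two-week returns,
    a standard remedy for thin-trading bias in weekly data."""
    def to_biweekly(r: pd.Series) -> pd.Series:
        # Compound each successive pair of weekly returns: (1+r1)(1+r2) - 1.
        return (1 + r).groupby(np.arange(len(r)) // 2).prod() - 1
    return to_biweekly(bank).corr(to_biweekly(index))
```

Lengthening the return interval lets prices of thinly traded shares catch up with the index, so the estimated correlation is less biased toward zero.<br />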

8.2 Bank Returns during the rebound<br />



By October 2010 the US banking index had rebounded by two-thirds, though it was<br />

still nearly 60% below its 2007 high. We examined how far our principal variables<br />

could explain the variation in country banking returns during the entire period May<br />

2007 to October 2010 that included both the slump and partial rebound. The result for<br />

the country variables is shown in Table 10.<br />

This extension to our forecasting period places much higher demands on our data.<br />

For example, by 2010 many large banks had been nationalized or acquired, so that the<br />

components of the Datastream index were substantially different from four years<br />

earlier. The regression now explains about one third of the cross-sectional dispersion<br />

in bank index returns. All the coefficients have the predicted sign and the time-deposit<br />

and bank-capital variables are significant at the 1% level. It is possible that the<br />

decline in the performance of the regression is due to measurement problems or it<br />

could be that once the crisis was over the variables which predict impact at the height<br />

of the crisis had become less important. We leave this as an issue for further study.<br />

9. Conclusions<br />

We have shown that the cross-sectional incidence of the crisis was related to:<br />

• The pre-existing correlation of the banking sector with the US;<br />

• The equity ratio measured relative to an unadjusted balance sheet;<br />

• The fragility of financing as measured primarily by the proportion of assets<br />

funded by deposits;<br />

• Banking assets as a proportion of GDP.<br />

These results were strongly significant and robust to changes in sample and model<br />

specification. In addition, there was consistent but less significant evidence that bank<br />

exposure to the crisis was related to the proportion of foreign claims.<br />

We have shown that the significant leverage ratio was that measured relative to the<br />

unadjusted balance sheet rather than Basel risk-weighted assets, but that banks were<br />

not penalized for taking advantage of Basel rules. We have also shown that the most<br />

informative measure of exposure derived from past returns was the correlation, not the<br />

“bad weeks return” variable. We find that both stock market and structural variables<br />

should be combined to give a more complete specification of the relationships which<br />

caused differential crisis impacts. Omitting either could lead to misidentification of<br />

the causes. Our results are robust to using individual bank and country index data. The<br />

framework is linear in the characteristics of individual banks and therefore gives an<br />

aggregate measure of systemic risk which is consistent with the measures for<br />

individual banks. However, we find that results for some other variables are sensitive<br />

to data availability and sample selection. We detect no evidence that crisis impact was<br />

related to the quality of governance, or the prior share return. There is some evidence<br />

of a connection with the prior growth in GDP, though this may well be a result of<br />

multicollinearity, in particular the association between GDP growth and the relative<br />

importance of the banking sector.<br />



Our results are economically significant. This is most simply illustrated by the<br />

bivariate analysis in Table 10, which shows the effect of our principal explanatory<br />

variables on the returns of individual banks. The table groups the banks into quartiles<br />

based on the magnitude of each variable and shows the mean weekly bank return for<br />

each quartile. With one modest exception the average decline in the value of the first<br />

quartile banks (those with the lowest ratios) is less than half that of banks in the fourth<br />

quartile.<br />
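The quartile comparison described above can be reproduced along the following lines; this is an illustrative sketch with hypothetical column names, not the authors' code.<br />

```python
import pandas as pd

def quartile_mean_returns(df: pd.DataFrame, var: str,
                          ret: str = "crisis_return") -> pd.Series:
    """Mean crisis return by quartile of `var`, as in the bivariate
    analysis: quartile 1 holds the banks with the lowest ratios."""
    quartile = pd.qcut(df[var], 4, labels=[1, 2, 3, 4])
    return df.groupby(quartile, observed=True)[ret].mean()
```

Under the paper's finding, the mean decline for quartile 1 should be less than half that for quartile 4 for most of the principal variables.<br />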

Our results illustrate the value of using the international impact of the crisis as a<br />

natural experiment to test the robustness of empirical results found using US data.<br />

Since recent international policy recommendations such as those in Acharya, Cooley,<br />

Richardson, and Walter (2010), Financial Economists Roundtable (2010), Kane<br />

(2010), Squam Lake Group (2010), and Scott (2010) are based either explicitly or<br />

implicitly on assumptions about empirical relationships it is important that their<br />

empirical foundations be robust to this type of analysis.<br />

We also show the importance of deriving the empirical test of the propagation<br />

mechanism from an understanding of the specific mechanism that operated. Our<br />

results are not intended to be a model of the propagation of all international financial<br />

crises. We derived the test from knowledge that the crisis was primarily a banking-sector<br />

crisis which then spread to the remainder of the economy and that the crisis<br />

originated in the US, so its propagation should depend on linkages with the US.<br />

Future crises will not necessarily share these characteristics. However, regressions<br />

that omitted the correlation variable continued to have strong explanatory power. 19<br />

The implications of our results for controlling future risks depend on which features<br />

of the recent crisis are likely to operate in a similar way in future crises. Policymakers<br />

can influence four of the variables which we find to be important: capital ratios,<br />

how banks are financed, international transactions between banks, and to a lesser<br />

extent the size of the banking sector. They do not have direct control over the<br />

correlation with other banking sectors, but they do have influence over the<br />

international linkages between banks.<br />

Our results show that the important balance-sheet variable to regulate in order to<br />

protect a country's banking system is the amount of the banking sector that is<br />

financed with liabilities other than capital and deposits. This is potentially much<br />

simpler than the Basel approach. We do not find that the refinement and<br />

sophistication of the Basel risk-adjusted ratio helps to explain the cross-country<br />

impact of the crisis in our test.<br />

Since it was the differential impact of the banking crisis in different countries that led<br />

to broader differences in their economic performance during the crisis, our findings<br />

could be extended to attempt to measure the impact on other economic variables, such<br />

as GDP. To do that it would be necessary to embed our model of banking sector<br />

linkages in an extended model which includes the linkage between the banking sector<br />

and aggregate economic activity.<br />

19 The correct specification of market index is likely to be important. We reran our principal<br />

regression for the individual banks using the correlation with the world stock market index rather than<br />

the US banking index. The results were very similar to the regression with no correlation measure at<br />

all.<br />



References<br />

Acharya, Viral V., Thomas F. Cooley, Matthew P. Richardson, and Ingo Walter,<br />

2010, Regulating Wall Street: The Dodd-Frank Act and the New Architecture of<br />

Global Finance, New York University Stern School of Business.<br />

Acharya, Viral V., Lasse H. Pedersen, Thomas Philippon, and Matthew<br />

Richardson, 2010, “Measuring Systemic Risk,” working paper, New York University<br />

Stern School.<br />

Allen, Franklin, and Douglas Gale, 2000, “Financial Contagion,” Journal<br />

of Political Economy, 108(1), 1–33.<br />

Allen, Franklin, and Douglas Gale, 2007, Understanding Financial Crises,<br />

Oxford University Press.<br />

Allen, Franklin, Aneta Hryckiewicz, Oskar Kowalewski, and Günseli Tümer-<br />

Alkan, “Transmission of Bank Liquidity Shocks in Loan and Deposit Markets: The<br />

Role of Interbank Borrowing and Market Monitoring,” University of Amsterdam,<br />

November 2010<br />

Bae, Kee-Hong, G. Andrew Karolyi, and René M. Stulz, 2003, “A New<br />

Approach to Measuring Financial Contagion,” Review of Financial Studies 16, 717-<br />

763.<br />

Beltratti, Andrea, and René M. Stulz , 2009, “Why Did Some Banks Perform<br />

Better during the Credit Crisis? A Cross-Country Study of the Impact of Governance<br />

and Regulation,” ECGI Working Paper Series in Finance, Working Paper N°.<br />

254/2009, July.<br />

Bennett, Paul, and Jeanette Kelleher, 1988, “The international transmission of<br />

stock price disruption in October 1987.” Quarterly Review of the Federal Reserve<br />

Bank of New York, Summer, 17-33.<br />

Calvo, Guillermo, 2005, Emerging Capital Markets in Turmoil: Bad Luck or<br />

Bad Policy, Cambridge, MA, MIT Press.<br />

Corsetti, Giancarlo, Marcello Pericoli, and Massimo Sbracia, 2002, “Some<br />

Contagion, Some Interdependence: More Pitfalls in Tests of Financial Contagion,”<br />

CEPR Discussion Paper 3310.<br />


Djankov, Simeon, Rafael La Porta, Florencio Lopez-de-Silanes, and Andrei<br />

Shleifer, 2008, “The Law and Economics of Self-dealing,” Journal of Financial<br />

Economics, 88 (3), 430-465.<br />

Financial Economists Roundtable, 2010, “Reforming the OTC Derivatives<br />

Market,” Journal of Applied Corporate Finance 22.3, 40-47.<br />

Forbes, Kristin and Roberto Rigobon, 2002, “No Contagion, Only<br />

Interdependence: Measuring Stock Market Comovements,” Journal of Finance 57 (5),<br />

2223-2261.<br />

Frankel, Jeffrey A. and George Saravelos, “Are Leading Indicators of<br />

Financial Crises Useful for Assessing Country Vulnerability? Evidence from the<br />

2008-09 Global Crisis,” NBER Working Paper No. 16047, June 2010.<br />

Gropp, Reint, and Gerard Moerman, 2003, “Measurement of Contagion in<br />

Banks’ Equity Prices,” European Central Bank, Working Paper No. 297, December.<br />

Herring, Richard J., and Susan Wachter, 1999, “Real Estate Booms and<br />

Banking Busts: An International Perspective,” Group of Thirty Occasional Papers.<br />

IMF, 2008, Global Financial Stability Report, April.<br />



Kane, Edward, 2010, “The Importance of Monitoring and Mitigating the<br />

Safety-Net Consequences of Regulation-Induced Innovation,” working paper, Boston<br />

College.<br />

King, Mervyn and Sushil Wadhwani, 1990, “Transmission of Volatility<br />

Between Stock Markets,” Review of Financial Studies 3, 5-33.<br />

Kirkpatrick, Grant, 2008, “The Corporate Governance Lessons from the<br />

Financial Crisis,” OECD report, Paris.<br />

La Porta, Rafael, Florencio Lopez-de-Silanes, Andrei Shleifer, and Robert<br />

Vishny, 1998, “Law and Finance,” Journal of Political Economy 106, 1113-1155.<br />

Lane, Philip R, and Gian Maria Milesi-Ferretti, 2010, The cross-country<br />

incidence of the global crisis, forthcoming IMF Economic Review.<br />

Navaretti, Giorgio B., Giacomo Calzolari, Alberto F. Pozzolo, and Micol<br />

Levi, 2010, “Multinational Banking in Europe: Financial Stability and Regulatory<br />

Implications Lessons from the Financial Crisis,” Centro Studi Luca d’Agliano<br />

Working Paper No 292.<br />

Popov, Alexander A., and Gregory F. Udell, 2010, “Cross-border Banking<br />

and the International Transmission of Financial Distress During the Crisis of 2007-<br />

2008,” Working Paper Series 1203, European Central Bank.<br />

Reinhart, Carmen M., and Kenneth S. Rogoff, 2008, “Is the 2007 U.S. Subprime Crisis So<br />

Different? An International Historical Comparison”, American Economic Review 98,<br />

339–344.<br />

Rose, Andrew K, and Mark M Spiegel, 2009, “Cross-country Causes and<br />

Consequences of the 2008 crisis: Early Warning,” working paper, Italian Ministry of<br />

Economy and Finance.<br />

Rose, Andrew K, and Mark M Spiegel, 2010, “Cross-country Causes and<br />

Consequences of the 2008 crisis: International Linkages and American Exposure,”<br />

Pacific Economic Review, forthcoming.<br />

Squam Lake Group, 2010, The Squam Lake Report: Fixing the Financial<br />

System, Princeton: Princeton University Press.<br />

Scott, Kenneth E, 2010, “The Financial Crisis: Causes and Lessons,” Journal<br />

of Applied Corporate Finance 22.3, 22-29.<br />

Wolf, Holger C., 2000, “Regional Contagion Effects,” Unpublished working<br />

paper. George Washington University, Washington, DC.<br />

Yuan, Kathy, 2005, “Asymmetric Price Movements and Borrowing<br />

Constraints: A Rational Expectations Equilibrium Model of Crisis, Contagion, and<br />

Confusion,” Journal of Finance 60, 379-411.<br />



Figure 1: The relationship between crisis returns and prior correlation with the<br />

US banking sector<br />

The figure shows the relationship between the normalised return during the crisis and the prior<br />

correlation with the US banking sector for the country sample. Raw crisis returns are average<br />

percentage weekly returns in the period May 2007 to March 2009. Standard deviations (percent per<br />

week) are calculated using weekly data for the calendar years 2005-2006. The standardized return is<br />

calculated as the ratio of the raw crisis return to the standard deviation. Correlation is the correlation of<br />

the bank industry index with the bank industry index for the US using weekly data from January 2005<br />

to December 2006.<br />
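The standardization described in this caption can be computed roughly as follows; the date windows come from the caption, while the function itself is our illustrative sketch rather than the authors' code.<br />

```python
import pandas as pd

def standardized_crisis_return(weekly_returns: pd.Series) -> float:
    """Raw crisis return (mean weekly return, May 2007 - March 2009)
    divided by the pre-crisis weekly standard deviation (2005-2006).
    Expects a Series indexed by week-end dates."""
    # Pre-crisis volatility: weekly returns over calendar years 2005-2006.
    pre = weekly_returns["2005-01-01":"2006-12-31"]
    # Raw crisis return: average weekly return, May 2007 to March 2009.
    crisis = weekly_returns["2007-05-01":"2009-03-31"]
    return crisis.mean() / pre.std()
```

Scaling by pre-crisis volatility puts the crisis declines of high- and low-volatility banking sectors on a comparable footing.<br />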



Table 1<br />

The country sample<br />

The table shows the country sample. Raw crisis returns are average percentage weekly returns in the<br />

period May 2007 to March 2009. Standard deviations (percent per week) are calculated using weekly<br />

data for the calendar years 2005-2006. The standardized return is calculated as the ratio of the raw<br />

crisis return to the standard deviation. All returns are for Datastream country banking sector stock<br />

indices.<br />

Country Mean crisis return Std. dev. 2005-6 Standardized return Country Mean crisis return Std. dev. 2005-6 Standardized return<br />

Argentina -1.19 2.96 -0.403 Korea -0.91 3.41 -0.268<br />

Australia -0.62 1.56 -0.396 Malaysia -0.39 1.49 -0.263<br />

Austria -1.39 3.16 -0.441 Malta -0.61 3.31 -0.183<br />

Belgium -2.27 2.09 -1.088 Mexico -0.27 3.52 -0.076<br />

Brazil -0.18 3.56 -0.050 Netherlands -1.32 2.13 -0.620<br />

Bulgaria -1.92 5.77 -0.333 Norway -0.95 2.78 -0.341<br />

Canada -0.59 1.45 -0.410 Pakistan -0.99 5.33 -0.186<br />

Chile -0.13 1.83 -0.071 Peru -0.49 3.90 -0.127<br />

China 0.03 3.36 0.010 Philippines -0.64 2.77 -0.231<br />

Colombia -0.11 4.73 -0.024 Poland -0.95 3.46 -0.273<br />

Cyprus -1.65 3.60 -0.460 Portugal -1.28 1.59 -0.804<br />

Czech -0.55 4.20 -0.130 Romania -1.56 5.94 -0.262<br />

Denmark -1.65 1.97 -0.833 Russia -1.40 5.50 -0.254<br />

Finland -0.86 3.11 -0.276 Singapore -0.86 1.95 -0.440<br />

France -1.49 2.14 -0.697 Slovenia -1.05 2.18 -0.483<br />

Germany -1.62 2.00 -0.811 S Africa -0.46 3.99 -0.116<br />

Greece -1.49 3.17 -0.470 Spain -1.01 1.84 -0.547<br />

Hong Kong -1.00 1.43 -0.698 Sri Lanka -0.34 2.90 -0.117<br />

Hungary -1.54 5.56 -0.278 Sweden -1.01 2.47 -0.409<br />

India -0.29 4.22 -0.068 Switzerland -1.28 2.29 -0.560<br />

Indonesia -0.16 3.57 -0.043 Taiwan -0.28 3.36 -0.083<br />

Ireland -2.60 2.31 -1.122 Thailand -0.37 3.36 -0.111<br />

Israel -0.87 3.02 -0.289 Turkey -0.46 4.87 -0.095<br />

Italy -1.44 1.71 -0.838 UK -1.36 1.66 -0.821<br />

Japan -0.85 3.57 -0.237 Venezuela -0.35 2.95 -0.120<br />



Table 2<br />

Summary statistics for the country sample<br />

The table shows summary statistics for the country sample. All returns are for Datastream country<br />

banking sector stock indices. Raw crisis returns are average percentage weekly returns in the period<br />

May 2007 to March 2009. Standard deviations (percent per week) are calculated using weekly data for<br />

the calendar years 2005-2006. The standardized return is calculated as the ratio of the raw crisis return<br />

to the standard deviation. Correlation is the correlation of the bank industry index with the US bank<br />

industry index using weekly data from January 2005 to December 2006. Bad weeks return is the return<br />

of the country index in the 5% of weeks in 2005-2006 during which the US bank index had the worst<br />

returns. This is standardized by the standard deviation of returns. Equity ratio is (1 - Book<br />

Value(Equity)/Total Assets). Capital ratio is (1 - Book Value(Equity + Sub debt)/Total Assets).<br />

Basel ratio is (1-Basel risk-weighted capital ratio). Demand deposits is (1-Demand deposits/Total<br />

assets). Time deposits is (1-Time deposits/Total assets). Total deposits is (1-Total deposits/Total<br />

assets). Bank claims/GDP is the ratio of total bank assets to GDP. Foreign claims is the ratio of total<br />

foreign claims held by banks to total bank assets. Derivatives is the ratio of value of derivative<br />

positions to total bank assets. Anti-director rights and self-dealing are Djankov et al’s indices. Return in<br />

2006 is the average weekly return in 2006. GDP growth is the rate of growth in percent over the period<br />

2001-2006. Unless otherwise indicated all independent variables are measured at the end of 2006. Data<br />

sources are given in the Appendix.<br />

Variable Number of obs. Mean Std. Dev. Median Maximum Minimum<br />

Raw crisis return 50 -0.366 0.285 -0.278 0.010 -1.127<br />

Standardized crisis return 50 -0.009 0.006 -0.009 0.000 -0.026<br />

Correlation 50 0.269 0.163 0.273 0.572 -0.103<br />

Standardized bad weeks return 50 -0.403 0.427 -0.424 0.583 -1.187<br />

1 - Equity ratio 45 0.898 0.046 0.902 0.999 0.778<br />

1 - Capital ratio 49 0.921 0.028 0.927 0.970 0.850<br />

1 - Basel ratio 50 0.866 0.031 0.874 0.951 0.780<br />

1 - Demand deposits 48 0.814 0.118 0.847 0.979 0.335<br />

1 - Time deposits 48 0.646 0.174 0.652 0.961 0.254<br />

1 - Total deposits 48 0.460 0.180 0.434 0.858 0.039<br />

Bank claims/GDP 48 1.824 1.494 1.407 6.828 0.362<br />

Foreign claims 48 0.373 0.175 0.335 0.777 0.005<br />

Derivatives 47 0.017 0.013 0.012 0.052 0.000<br />

Antidirector rights 47 0.347 0.113 0.350 0.500 0.000<br />

Self-dealing 47 0.501 0.232 0.460 1.000 0.090<br />

Return in 2006 50 0.006 0.004 0.005 0.018 -0.002<br />

GDP growth 50 9.495 6.845 7.837 34.678 0.385<br />



Table 3<br />

Correlation Matrix for the country sample<br />

The table provides the matrix of Pearson correlation coefficients for our country sample. Variables are as defined in Table 2.<br />

1 2 3 4 5 6 7 8 9 10<br />

Standardized crisis return 1 1.00 0.83 -0.54 0.36 -0.41 -0.45 -0.26 -0.21 -0.42 -0.54<br />

Raw crisis return 2 0.83 1.00 -0.31 0.27 -0.39 -0.37 -0.16 -0.14 -0.48 -0.56<br />

Correlation 3 -0.54 -0.31 1.00 -0.55 0.36 0.40 0.22 0.35 0.11 0.34<br />

Standardized bad weeks return 4 0.36 0.27 -0.55 1.00 -0.29 -0.17 -0.03 -0.26 0.02 -0.15<br />

1 - Equity ratio 5 -0.41 -0.39 0.36 -0.29 1.00 0.57 0.38 0.05 0.16 0.18<br />

1 – Capital ratio 6 -0.45 -0.37 0.40 -0.17 0.57 1.00 0.69 -0.09 0.13 0.07<br />

1 - Basel ratio 7 -0.26 -0.16 0.22 -0.03 0.38 0.69 1.00 -0.21 0.10 -0.04<br />

1 - Demand deposits 8 -0.21 -0.14 0.35 -0.26 0.05 -0.09 -0.21 1.00 -0.29 0.38<br />

1 -Time deposits 9 -0.42 -0.48 0.11 0.02 0.16 0.13 0.10 -0.29 1.00 0.78<br />

1 - Total deposits 10 -0.54 -0.56 0.34 -0.15 0.18 0.07 -0.04 0.38 0.78 1.00<br />

Bank claims/GDP 11 -0.57 -0.44 0.22 0.01 0.49 0.21 0.07 0.37 0.28 0.51<br />

Foreign claims 12 -0.29 -0.41 -0.15 0.20 0.07 -0.16 -0.18 0.12 0.31 0.38<br />

Derivatives 13 -0.39 -0.24 0.42 -0.18 0.31 0.37 0.16 0.18 0.27 0.39<br />

Anti-director rights 14 0.02 0.07 0.03 0.20 0.01 -0.11 -0.19 0.44 -0.27 0.03<br />

Self-dealing 15 -0.07 0.11 -0.07 0.09 0.09 -0.01 0.11 0.33 -0.42 -0.19<br />

Return in 2006 16 0.18 -0.04 -0.21 0.30 -0.06 -0.21 0.00 -0.24 0.06 -0.10<br />

GDP growth 17 0.49 0.30 -0.41 0.22 -0.29 -0.45 -0.36 -0.33 -0.08 -0.29<br />

Note: Sample sizes may differ among cells<br />



Table 3 (Continued)<br />

Correlation Matrix<br />

The table provides the matrix of Pearson correlation coefficients for our country sample Variables are as defined in Table 2.<br />

11 12 13 14 15 16 17<br />

Standardized crisis return 1 -0.57 -0.29 -0.39 -0.02 0.07 0.18 0.49<br />

Raw crisis return 2 -0.44 -0.41 -0.24 -0.07 -0.11 -0.04 0.30<br />

Correlation 3 0.22 -0.15 0.42 -0.03 0.07 -0.21 -0.41<br />

Standardized bad weeks return 4 0.01 0.20 -0.18 0.20 0.09 0.30 0.22<br />

Equity ratio 5 0.49 0.07 0.31 -0.01 -0.09 -0.06 -0.29<br />

Capital ratio 6 0.21 -0.16 0.37 0.11 0.01 -0.21 -0.45<br />

Basel ratio 7 0.07 -0.18 0.16 0.19 -0.11 0.00 -0.36<br />

Demand deposits 8 0.37 0.12 0.18 -0.44 -0.33 -0.24 -0.33<br />

Time deposits 9 0.28 0.31 0.27 0.27 0.42 0.06 -0.08<br />

Total deposits 10 0.51 0.38 0.39 -0.03 0.19 -0.10 -0.29<br />

Bank claims/GDP 11 1.00 0.55 0.29 -0.15 -0.30 -0.15 -0.50<br />

Foreign claims 12 0.55 1.00 0.18 -0.02 -0.11 0.29 0.01<br />

Derivatives 13 0.29 0.18 1.00 -0.13 -0.14 -0.20 -0.41<br />

Anti-director rights 14 0.15 0.02 0.13 1.00 0.54 -0.26 -0.16<br />

Self-dealing 15 0.30 0.11 0.14 0.54 1.00 -0.20 -0.12<br />

Return in 2006 16 -0.15 0.29 -0.20 0.26 0.20 1.00 0.48<br />

GDP Growth 17 -0.50 0.01 -0.41 0.16 0.12 0.48 1.00<br />



Table 4<br />

Summary statistics for the individual bank sample<br />

The table shows summary statistics for the individual bank sample. All returns are from Datastream. Raw crisis<br />

returns are average percentage weekly returns in the period May 2007 to March 2009. Standard<br />

deviations (percent per week) are calculated using weekly data for the calendar years 2005-2006. The<br />

standardized return is calculated as the ratio of the raw crisis return to the standard deviation.<br />

Correlation is the correlation of the bank return with the return on the US bank industry index using<br />

weekly data from January 2005 to December 2006. Bad weeks return is the return of the bank equity in<br />

the 5% of weeks in 2005-2006 during which the US bank index had the worst returns. This is<br />

standardized by the standard deviation of returns. Equity ratio is (1 - Book Value of Equity/Total<br />

Assets). Capital ratio is (1 - Book Value(Equity + Sub debt + Hybrid debt)/Total Assets). Basel ratio is<br />

(1-Basel risk-weighted capital ratio). Demand deposits is (1-Demand deposits/Total assets). Time<br />

deposits is (1-Time deposits/Total assets). Total deposits is (1-Total deposits/Total assets). Bank<br />

claims/GDP is the ratio of total bank assets to GDP. Foreign claims is the ratio of total foreign claims<br />

held by banks to total bank assets. Short-term debt is the ratio of short-term debt liabilities to total bank<br />

assets. Interbank loans is the ratio of interbank assets to total bank assets. Due other banks is the ratio<br />

of interbank funding to total bank assets. Mortgages is the ratio of real estate mortgages to total bank<br />

assets. Governance is the CGQ index. Return in 2006 is the average weekly return in 2006. GDP<br />

growth is the rate of growth in percent over the period 2001-2006. Unless otherwise indicated all<br />

independent variables are measured at the end of 2006. Data sources are given in the Appendix.<br />

Variable Number of obs. Mean Std. Dev. Median Maximum Minimum<br />

Raw crisis return 381 -0.007 0.007 -0.006 0.007 -0.032<br />

Standardized crisis return 381 -0.212 0.233 -0.156 0.271 -1.211<br />

Correlation 381 0.160 0.136 0.163 0.580 -0.160<br />

Standardized bad weeks return 380 -0.978 1.948 -0.982 4.131 -5.525<br />

1 - Equity ratio 361 0.921 0.074 0.937 0.988 0.198<br />

1 - Capital ratio 273 0.904 0.067 0.913 0.982 0.194<br />

1 - Basel ratio 289 0.872 0.041 0.881 0.963 0.644<br />

1 - Demand deposits 256 0.815 0.141 0.853 1.000 0.389<br />

1 - Time deposits 264 0.628 0.225 0.630 1.000 0.146<br />

1 - Total deposits 358 0.365 0.213 0.335 1.000 0.059<br />

Bank claims/GDP 368 1.921 1.272 1.784 6.815 0.361<br />

Foreign claims 368 0.312 0.174 0.271 0.777 0.005<br />

Short-term debt 355 0.108 0.895 0.076 0.592 0.000<br />

Interbank loans 331 0.072 0.085 0.050 0.628 0.000<br />

Due to other banks 322 0.093 0.079 0.080 0.550 0.000<br />

Mortgages 234 0.175 0.193 0.102 0.815 0.000<br />

Governance (CGQ) 118 52.040 28.956 51.365 99.950 0.790<br />

Return in 2006 382 0.004 0.006 0.004 0.031 -0.012<br />

GDP growth 385 8.113 7.847 6.315 34.678 0.385<br />



Table 5<br />

Correlation Matrix for the individual bank sample<br />

The table provides the matrix of Pearson correlation coefficients for our individual bank sample. Variables are as defined in Table 4.<br />

1 2 3 4 5 6 7 8 9 10<br />

Standardized crisis return 1 1.00 0.90 -0.48 0.33 -0.19 -0.16 -0.15 -0.04 -0.35 -0.53<br />

Raw crisis return 2 0.90 1.00 -0.38 0.30 -0.14 -0.12 -0.14 -0.08 -0.30 -0.48<br />

Correlation 3 -0.48 -0.38 1.00 -0.59 0.18 0.12 0.10 0.09 0.22 0.31<br />

Standardized bad weeks return 4 0.33 0.30 -0.59 1.00 -0.02 -0.01 0.08 -0.17 -0.11 -0.33<br />

1 - Equity ratio 5 -0.19 -0.14 0.18 -0.02 1.00 0.96 0.63 0.00 -0.10 -0.20<br />

1 - Capital ratio 6 -0.16 -0.12 0.12 -0.01 0.96 1.00 0.59 0.02 -0.02 -0.17<br />

1 - Basel ratio 7 -0.15 -0.14 0.10 0.08 0.63 0.59 1.00 -0.20 0.25 -0.04<br />

1 - Demand deposits 8 -0.04 -0.08 0.09 -0.17 0.00 0.02 -0.20 1.00 -0.40 0.17<br />

1 - Time deposits 9 -0.35 -0.30 0.22 -0.11 -0.10 -0.02 0.25 -0.40 1.00 0.71<br />

1 - Total deposits 10 -0.53 -0.48 0.31 -0.33 -0.20 -0.17 -0.04 0.17 0.71 1.00<br />

Bank claims/GDP 11 -0.26 -0.16 0.16 0.08 0.29 0.22 0.23 0.03 0.07 0.02<br />

Foreign claims 12 -0.19 -0.18 -0.07 -0.03 -0.01 -0.03 -0.14 -0.08 -0.08 0.23<br />

Short-term debt 13 -0.45 -0.40 0.25 0.21 0.13 0.12 0.13 0.15 0.44 0.71<br />

Interbank loans 14 -0.14 -0.11 0.01 -0.13 -0.14 -0.17 -0.19 -0.04 -0.07 0.10<br />

Due to other banks 15 -0.09 -0.07 -0.02 -0.07 -0.18 -0.12 -0.21 -0.03 0.00 0.15<br />

Mortgages 16 -0.16 -0.08 0.01 -0.04 0.16 0.13 0.15 -0.14 0.17 0.32<br />

Governance (CGQ) 17 -0.09 -0.11 0.26 -0.21 0.27 0.21 0.10 0.06 0.08 0.17<br />

Return in 2006 18 -0.05 -0.10 -0.08 0.06 -0.11 -0.11 -0.20 -0.28 -0.05 0.14<br />

GDP growth 19 0.25 0.20 -0.22 -0.01 -0.25 -0.30 -0.30 -0.07 -0.13 0.01<br />

Note: Sample sizes may differ among cells<br />



Table 5 (Continued)<br />

Correlation Matrix<br />

The table provides the matrix of Pearson correlation coefficients for our individual bank sample. Variables are as defined in Table 4.<br />

11 12 13 14 15 16 17 18 19<br />

Standardized crisis return 1 -0.26 -0.19 -0.45 -0.14 -0.09 -0.16 -0.09 -0.05 0.25<br />

Raw crisis return 2 -0.16 -0.18 -0.40 -0.11 -0.07 -0.08 -0.11 -0.10 0.20<br />

Correlation 3 0.16 -0.07 0.25 0.01 -0.02 0.01 0.26 -0.08 -0.22<br />

Standardized bad weeks return 4 0.08 -0.03 0.21 -0.13 -0.07 -0.04 -0.21 0.06 -0.01<br />

1 - Equity ratio 5 0.29 -0.01 0.13 -0.14 -0.18 0.16 0.27 -0.11 -0.25<br />

1 - Capital ratio 6 0.22 -0.03 0.12 -0.17 -0.12 0.13 0.21 -0.11 -0.30<br />

1 - Basel ratio 7 0.23 -0.14 0.13 -0.19 -0.21 0.15 0.10 -0.20 -0.30<br />

1 - Demand deposits 8 0.03 -0.08 0.15 -0.04 -0.03 -0.14 0.06 -0.28 -0.07<br />

1 - Time deposits 9 0.07 -0.08 0.44 -0.07 0.00 0.17 0.08 -0.05 -0.13<br />

1 - Total deposits 10 0.02 0.23 0.71 0.10 0.15 0.32 0.17 0.14 0.01<br />

Bank claims/GDP 11 1.00 0.36 0.14 0.11 0.10 0.45 0.09 -0.20 -0.51<br />

Foreign claims 12 0.36 1.00 0.21 0.43 0.35 0.39 0.07 0.44 0.26<br />

Short-term debt 13 0.14 0.21 1.00 0.19 0.21 0.14 0.09 0.12 -0.05<br />

Interbank loans 14 0.11 0.43 0.19 1.00 0.79 0.03 0.02 0.25 -0.02<br />

Due other banks 15 0.10 0.35 0.21 0.79 1.00 -0.08 0.07 0.30 0.06<br />

Mortgages 16 0.45 0.39 0.14 0.03 -0.08 1.00 0.11 0.06 -0.15<br />

Governance 17 0.09 0.07 0.09 0.02 0.07 0.11 1.00 0.03 0.10<br />

Return in 2006 18 -0.20 0.44 0.12 0.25 0.30 0.06 0.03 1.00 0.43<br />

GDP growth 19 -0.51 0.26 -0.05 -0.02 0.06 -0.15 0.10 0.43 1.00<br />



Table 6<br />

Comparison of different leverage measures<br />

This table presents regressions of the standardized crisis return using three different leverage measures.<br />

Panel A is for the country bank indices, Panel B for individual banks, and Panel C for individual banks<br />

using a common sample for all regressions. Equity ratio is the ratio of the book value of equity to bank<br />

assets. Capital ratio is the ratio of the book value of (equity + subordinated debt) to bank assets. Basel<br />

ratio is the Basel II risk-weighted capital ratio. Basel-Equity is the difference between the Basel and<br />

equity ratios. All other variables are as defined in the Appendix. Standardized crisis return is the<br />

average weekly return over the period May 2007 to March 2009 divided by the weekly standard<br />

deviation in 2005-2006. The independent variables are measured at the end of 2006. Estimation is by<br />

OLS. The table also reports the adjusted R-square and number of observations. T-statistics are given in<br />

parentheses.<br />

Panel 6A: Country regressions<br />

Dependent variable: Standardized crisis return<br />

                          (1)        (2)        (3)        (4)<br />
Constant                 -.11*      2.48**     1.04       2.40*<br />
                        (-1.69)    (2.11)     (1.07)     (1.99)<br />
Correlation              -.94***    -.74***    -.88***    -.74***<br />
                        (-4.41)    (-3.30)    (-4.05)    (-3.23)<br />
1 - Capital ratio                  -2.87**               -2.75**<br />
                                   (-2.20)               (-2.04)<br />
1 - Basel ratio                               -1.35<br />
                                             (-1.19)<br />
Basel-Capital ratio                                       -.60<br />
                                                         (-.40)<br />
Adjusted R2               .27        .33        .28        .32<br />
N                          50         49         48         49<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />

Panel 6B: Individual bank regressions<br />

Dependent variable: Standardized crisis return<br />

                          (1)        (2)        (3)        (4)        (5)<br />
Constant                 -.08***     .22        .25        .44        .87***<br />
                        (-5.00)    (1.65)     (1.45)     (1.65)     (2.81)<br />
Correlation              -.82***    -.80***    -.85***    -.82***    -.77***<br />
                       (-10.68)   (-9.88)    (-9.16)    (-8.59)    (-7.97)<br />
1 - Equity ratio                    -.33**                          -1.04***<br />
                                   (-2.24)                          (-3.01)<br />
1 - Capital ratio                              -.38**<br />
                                              (-1.97)<br />
1 - Basel ratio                                           -.60**<br />
                                                         (-1.97)<br />
Basel-Equity ratio                                                   -.04<br />
                                                                    (-.10)<br />
Adjusted R2               .23        .24        .25        .22        .23<br />
N                         381        361        273        289        287<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />



Table 6 (continued)<br />

Panel 6C: Individual bank regressions (common sample)<br />

Dependent variable: Standardized crisis return<br />

                          (1)        (2)        (3)        (4)<br />
Constant                 -.09***    1.98***     .61        .47<br />
                        (-3.67)    (4.04)     (1.37)     (1.42)<br />
Correlation              -.84***    -.70***    -.79***    -.81***<br />
                        (-7.82)   (-7.78)    (-6.99)    (-7.52)<br />
1 - Equity ratio                   -2.25***<br />
                                  (-7.03)<br />
1 - Capital ratio                              -.79<br />
                                              (-1.58)<br />
1 - Basel ratio                                           -.65*<br />
                                                         (-1.71)<br />
Adjusted R2               .21        .27        .22        .22<br />
N                         223        223        223        223<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />



Table 7<br />

Country-level determinants of crisis returns<br />

This table presents regressions of the standardized crisis return for the country bank indices on country-level<br />

variables, as defined in the Appendix. The standardized return is the average weekly return over<br />

the period May 2007 to March 2009 divided by the weekly standard deviation in 2005-2006. All the<br />

independent variables are measured at the end of 2006, with the exception of the correlation with the<br />

US index, which is estimated from weekly data for 2005-2006, and the rate of GDP growth, which is<br />

measured over the period 2001-2006. Estimation is by OLS. The table also reports the adjusted R-square<br />

and number of observations. T-statistics are given in parentheses.<br />

Dependent variable: Standardized crisis return<br />

                            (1)       (2)       (3)       (4)       (5)<br />
Constant                   2.48**    2.99**    4.32***   3.20**    1.83<br />
                          (2.11)    (2.54)    (3.76)    (2.56)    (1.32)<br />
Correlation coefficient    -.74***   -.58***             -.57**    -.63**<br />
                         (-3.30)   (-2.69)             (-2.55)   (-2.32)<br />
1 - Capital ratio         -2.87**   -2.83**   -4.04***  -3.12**   -1.58<br />
                         (-2.20)   (-2.43)   (-3.51)   (-2.54)   (-1.16)<br />
1 - Demand deposits                  -.18      -.57*     -.15      -.09<br />
                                   (-.57)    (-1.87)   (-.44)    (-.26)<br />
1 - Time deposits                    -.39*     -.54**    -.37*     -.46**<br />
                                   (-2.00)   (-2.66)   (-1.78)   (-2.05)<br />
Bank claims/GDP                      -.05*     -.06*     -.06**    -.06<br />
                                   (-1.94)   (-1.95)   (-2.09)   (-1.55)<br />
Foreign claims                       -.27      -.14      -.29      -.31<br />
                                   (-1.29)   (-.61)    (-1.30)   (-1.28)<br />
Derivatives                                   1.37<br />
                                              (.52)<br />
Return in 2006                                 .10<br />
                                              (.01)<br />
GDP growth                                     .00<br />
                                              (.65)<br />
Anti-director rights                                     .16<br />
                                                        (.47)<br />
Self-dealing                                                       -.16<br />
                                                                  (-.93)<br />
Adjusted R2                 .33       .57       .51       .57       .60<br />
N                            49        47        47        46        45<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />



Table 8<br />

Firm-level determinants of crisis returns<br />

This table presents regressions of the standardized crisis return for individual banks on firm-level<br />

variables, as defined in Table 4. The standardized return is the average weekly return over the period<br />

May 2007 to March 2009 divided by the weekly standard deviation in 2005-2006. All the independent<br />

variables are measured at the end of 2006, with the exception of the correlation with the US index,<br />

which is estimated from weekly data for 2005-2006, and the rate of GDP growth, which is measured<br />

over the period 2001-2006. Estimation is by OLS. The table also reports the adjusted R-square and<br />

number of observations. T-statistics are given in parentheses.<br />

Dependent variable: Standardized crisis<br />

return<br />

Constant -.08***<br />

(-5.00)<br />

Correlation -.82***<br />

(-10.68)<br />

1 - Equity ratio<br />

1 - Demand deposits<br />

1 - Time deposits<br />

1 - Total deposits<br />

Bank claims/GDP<br />

(1) (2) (3) (4) (5) (6) (7) (8)<br />

.80***<br />

(3.46)<br />

-.57***<br />

(-6.06)<br />

-.45**<br />

(-2.19)<br />

-.27***<br />

(-2.68)<br />

-.34***<br />

(-5.02)<br />

-.03**<br />

(-2.51)<br />

.77***<br />

(5.34)<br />

-.49***<br />

(-6.26)<br />

-.70***<br />

(-4.49)<br />

-.49***<br />

(-9.92)<br />

-.02***<br />

(-2.69)<br />

.94***<br />

(6.31)<br />

-.92***<br />

(-5.75)<br />

-.62***<br />

(-12.78)<br />

-.04***<br />

(-3.83)<br />

.24*<br />

(1.87)<br />

-.66***<br />

(-8.44)<br />

-.20<br />

(-1.40)<br />

-.02**<br />

(-2.16)<br />

.99***<br />

(4.32)<br />

-.48***<br />

(-5.35)<br />

-.97***<br />

(-4.00)<br />

-.49***<br />

(-8.57)<br />

-.01<br />

(-.86)<br />

Foreign claims -.09 -.12* -.06 -.18*** -.20**<br />

(-.90) (-1.95) (-.95) (-2.77) (-2.13)<br />

Short-term debt -.67***<br />

(-6.50)<br />

Interbank loans .08<br />

(.33)<br />

Due other banks<br />

Mortgages<br />

-.05<br />

(-.21)<br />

.77***<br />

(4.28)<br />

-.45***<br />

(-4.72)<br />

-.79***<br />

(-4.12)<br />

-.56***<br />

(-8.50)<br />

.01<br />

(.35)<br />

-.17*<br />

(-1.83)<br />

1.53*<br />

(1.91)<br />

-.21<br />

(-1.24)<br />

-1.56*<br />

(-1.79)<br />

-.64***<br />

(-6.45)<br />

-.03<br />

(-1.38)<br />

-.15<br />

(-1.07)<br />

Return in 2006<br />

-2.73<br />

.15**<br />

(2.10)<br />

-2.32<br />

(-1.13) (-.99)<br />

GDP growth<br />

.51** .59**<br />

(2.49) (2.57)<br />

Governance .00<br />

(1.23)<br />

Adjusted R 2<br />

.23 .38 .46 .40 .39 .49 .49 .55<br />

N 381 237 343 343 342 275 223 115<br />

Comparable adjusted R2 for regression (3)<br />

-- .43 -- .46 .46 .49 .47 .55<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />



Table 9 Regression of the standardized return during the crisis period on<br />

alternative measures of comovement during the crisis<br />

Panel A shows regressions of the standardized crisis return for the country bank indices on country-level<br />

variables, as defined in the Appendix. Panel B shows similar regressions for standardized returns<br />

for individual banks. In columns 1 and 3 the prior comovement is measured by the Pearson correlation<br />

coefficient between bank returns and the US index, which is estimated from weekly data for 2005-<br />

2006. In columns 2 and 4 it is measured by the average standardized return in the 5% of weeks where<br />

the US bank stock index gave the worst returns in the period prior to the crisis (the “bad weeks<br />

return”). Column 5 includes both measures of comovement in the same regression. The other<br />

independent variables are measured at the end of 2006. Estimation is by OLS. The table also reports<br />

the adjusted R-square and number of observations. T-statistics are given in parentheses.<br />

Panel 9A: Country regressions<br />

Dependent variable: Standardized crisis return<br />

                                (1)       (2)       (3)       (4)       (5)<br />
Constant                       -.11*      .24**     2.99**    3.67***   2.92**<br />
                              (-1.69)   (2.65)    (2.54)    (3.31)    (2.52)<br />
Correlation coefficient        -.94***             -.58***             -.42*<br />
                              (-4.41)             (-2.69)             (-1.79)<br />
Standardized bad weeks return            -.27***              .19**     .12<br />
                                        (-5.13)              (2.53)    (1.59)<br />
1 - Capital ratio                                 -2.83**   -3.52***  -2.81**<br />
                                                 (-2.43)   (-3.19)   (-2.46)<br />
1 - Demand deposits                                -.18      -.29      -.10<br />
                                                 (-.57)    (-.93)    (-.31)<br />
1 - Time deposits                                  -.39*     -.46**    -.38*<br />
                                                 (-2.00)   (-2.41)   (-1.99)<br />
Bank claims/GDP                                    -.05*     -.06**    -.06**<br />
                                                 (-1.94)   (-2.24)   (-2.13)<br />
Foreign claims                                     -.27      -.22      -.29<br />
                                                 (-1.29)   (-1.04)   (-1.39)<br />
Adjusted R2                     .27       .11       .57       .57       .59<br />
N                                50        50        47        47        47<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />

Panel 9B: Individual bank regressions<br />

Dependent variable: Standardized crisis return<br />

                                (1)       (2)       (3)       (4)       (5)<br />
Constant                       -.08***    .04***    .77***    .88***    .77***<br />
                              (-5.00)   (6.76)    (5.34)    (5.96)    (5.33)<br />
Correlation coefficient        -.82***             -.49***             -.47***<br />
                             (-10.68)             (-6.26)             (-4.88)<br />
Standardized bad weeks return            -.17***              .02***    .00<br />
                                       (-13.76)              (3.78)    (.38)<br />
1 - Equity ratio                                   -.70***   -.85***   -.70***<br />
                                                  (-4.49)   (-5.40)   (-4.48)<br />
1 - Total deposits                                 -.49***   -.55***   -.49***<br />
                                                  (-9.92)  (-10.83)   (-9.73)<br />
Bank claims/GDP                                    -.02***   -.04***   -.03***<br />
                                                  (-2.69)   (-4.13)   (-2.70)<br />
Foreign claims                                     -.12*     -.06      -.12*<br />
                                                  (-1.95)   (-.89)    (-1.88)<br />
Adjusted R2                     .23       .11       .46       .42       .46<br />
N                               381       379       343       342       342<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />



Table 10<br />

Country-level determinants of crisis returns including the recovery<br />

This table presents regressions of the standardized crisis return including recovery for the country bank<br />

indices on country-level variables, as defined in Table 2. The standardized return including recovery is<br />

the average weekly return over the period May 2007 to October 2010 divided by the weekly standard<br />

deviation in 2005-2006. All the independent variables are measured at the end of 2006, with the<br />

exception of the correlation with the US index, which is estimated from weekly data for 2005-2006.<br />

Estimation is by OLS. The table also reports the adjusted R-square and number of observations. T-statistics<br />

are given in parentheses.<br />

Dependent variable: Standardized crisis return including recovery<br />

(1)<br />

Constant 1.60**<br />

(2.25)<br />

Correlation -.01<br />

(-.05)<br />

Equity ratio<br />

-1.46**<br />

(-2.08)<br />

Demand deposits<br />

-.03<br />

(-.13)<br />

Time deposits<br />

-.34***<br />

(-2.86)<br />

Bank claims/GDP -.00<br />

(-.27)<br />

Foreign claims -.04<br />

(-.29)<br />

Adjusted R 2<br />

.27<br />

N 47<br />

*, **, and ***, significant at the 10, 5, and 1 percent level respectively.<br />

Table 11<br />

Economic significance of key measures<br />

The table shows mean weekly returns 2007-2009 for individual banks grouped into quartiles by the<br />

magnitude of their prior correlation with the US market and by key balance-sheet ratios. Correlation<br />

with the US is the correlation of the bank return with the return on the bank industry index for the US<br />

using weekly data from January 2005 to December 2006.<br />

                   Correlation      (1 - equity         (1 - total          Short-term<br />
                   with US          capital)/assets     deposits)/assets    debt/assets<br />
Quartile 1           -.44%            -.55%               -.36%               -.47%<br />
Quartile 2           -.62             -.63                -.56                -.49<br />
Quartile 3           -.64             -.62                -.66                -.75<br />
Quartile 4          -1.14            -1.02               -1.21               -1.12<br />



Appendix<br />

Variable definitions and data sources<br />

The bank balance sheet data in IMF International Financial Statistics refer to Other Depository<br />

Corporations (defined as resident financial corporations (except the central bank) and quasi-corporations<br />

that are mainly engaged in financial intermediation and that issue liabilities included in the<br />

national definition of broad money).<br />

Panel A: Country level variables<br />

Variable Description Source of data<br />

Raw crisis return Average percentage weekly return in the Datastream country banking<br />

period May 2007 to March 2009 sector stock indices<br />

Standardized crisis return Ratio of the raw crisis return to standard Datastream country banking<br />

deviation. Standard deviations (percent<br />

per week) calculated using weekly data<br />

for calendar years 2005-2006<br />

sector stock indices<br />

Correlation Correlation of bank industry with bank Datastream country banking<br />

industry index for the US using weekly<br />

data from January 2005 to December<br />

2006<br />

sector stock indices<br />

Bad weeks return Standardized return of the country index Datastream country banking<br />

in the 5% of weeks in 2005-2006 during<br />

which the US bank index had the worst<br />

returns.<br />

sector stock indices<br />

Capital ratio 1-Book Value(Equity + Sub debt)/Total IMF Global Financial Stability<br />

Assets<br />

Report<br />

Basel ratio 1-Basel risk-weighted capital ratio IMF Global Financial Stability<br />

Report<br />

Demand deposits 1-Demand deposits/Total assets IMF International Financial<br />

Statistics, country tables<br />

Time deposits 1-Time deposits/Total assets IMF International Financial<br />

Statistics, country tables<br />

Total deposits 1-Total deposits/Total assets IMF International Financial<br />

Statistics, country tables<br />

Bank claims/GDP Ratio of total bank assets to GDP IMF International Financial<br />

Statistics, country tables<br />

Foreign claims Ratio of total foreign claims held by IMF International Financial<br />

banks to total bank assets<br />

Statistics, country tables<br />

Derivatives Ratio of value of derivative positions to BIS Consolidated Banking<br />

total bank assets<br />

Statistics, Table 9C<br />

Anti-director rights Djankov et al's revised index of anti-directors' rights Djankov et al (2005), Table XII<br />

Self-dealing Djankov et al's index of self-dealing Djankov et al (2005), Table III<br />

Return in 2006 Average weekly share return in 2006 Datastream country banking<br />

sector stock indices<br />

GDP growth GDP growth in local currency 2001-2006 IMF International Financial<br />

Statistics<br />



Panel B: Individual bank variables<br />

Variable Description Source of data<br />

Raw crisis return As country variable measured for<br />

individual bank<br />

Datastream<br />

Standardized crisis return As country variable measured for Datastream<br />

Correlation<br />

individual bank<br />

As country variable measured for<br />

individual bank<br />

Datastream<br />

Bad weeks return As country variable measured for<br />

individual bank<br />

Datastream<br />

Equity ratio As country variable measured for<br />

individual bank<br />

Datastream<br />

Capital ratio As country variable measured for<br />

individual bank<br />

Osiris<br />

Basel ratio As country variable measured for Datastream, The Banker,<br />

individual bank<br />

company annual reports, and<br />

Osiris<br />

Demand deposits As country variable measured for<br />

individual bank<br />

Datastream<br />

Time deposits As country variable measured for<br />

individual bank<br />

Datastream<br />

Total deposits As country variable measured for<br />

individual bank<br />

Datastream<br />

Bank claims/GDP As country variable IMF International Financial<br />

Statistics, country tables<br />

Foreign claims As country variable IMF International Financial<br />

Statistics, country tables<br />

Short-term debt Short-Term Debt/Total Assets for<br />

individual bank<br />

Datastream<br />

Interbank loans Interbank Loans/Total Assets for<br />

individual bank<br />

Datastream<br />

Due other banks Interbank Liabilities/Total Assets for<br />

individual bank<br />

Osiris<br />

Mortgages Mortgage Loans/Total Assets for<br />

individual bank<br />

Datastream<br />

Governance CGQ governance index for individual<br />

bank<br />

Bloomberg<br />

Return in 2006 As country variable measured for<br />

individual bank<br />

Datastream<br />

GDP growth As country variable IMF International Financial<br />

Statistics<br />



SHORT WAVELENGTH FINANCE: WHAT, WHO AND WHY<br />

Dr. Nick Kondakis<br />

Kepler Asset Management, 100 Wall Street, New York, NY 10005<br />

nick@kondakis.com<br />

Despite the noble efforts on both sides of the Atlantic we are still under the dark spell of the<br />

Global Financial Crisis. One year since our last discussion here, we still have not solved any of<br />

the major problems that have caused the worst post-war crisis: The bad loans are still there,<br />

simply hidden from mark-to-market accounting, waiting to surface again in<br />

the wake of further declining real estate markets. The too-big-to-fail financial institutions are<br />

growing both in size and numbers. The greatly anticipated financial services industry reform has<br />

been heavily diluted. The derivatives market is still growing happily unregulated. And the<br />

European credit crisis is getting markedly worse with the Greek situation deteriorating further<br />

and more Eurozone countries taking or preparing to take the same road as Greece.<br />

The extraordinary amounts of liquidity pumped into the financial system in order to prevent<br />

another Great Depression have managed to support the world stock<br />

markets and create fresh asset bubbles in most commodity markets. In this environment, classic<br />

long term quantitative models geared to identify value/growth opportunities have lost a lot of<br />

their predictive ability and, in addition, the risks have grown significantly due to the expected<br />

increase in correlation between the various asset classes during the inevitable corrections.<br />

It is increasingly obvious that the field of quantitative investing is becoming more and more<br />

commoditized especially in the “wavelengths” longer than a few hours. Rigorous research has<br />

led us to identify predictable regimes in increasingly shorter scales. As the time base gets<br />

smaller, the factors entering the quantitative prediction models move away from the<br />

fundamentals and the long term price correlations to more technical ones like the market<br />

microstructure and the short-term correlations.<br />

These short wavelength regimes, mostly within minutes in an intraday session, are ideally<br />

suited for trading strategies by smaller firms and proprietary quantitative groups especially in the<br />

face of a possible escalation in crisis-related volatility. Their exploitation is becoming more<br />

feasible as technology follows Moore's Law: ultra-low-latency order-based data feeds can<br />

drive data-hungry quantitative models trading millions of shares a day with relatively small<br />

amounts of capital. A typical day can yield a few hundred Gigabytes of exchange feed data and<br />

the research capabilities have to be able to match these sizable samples. The processing power<br />

available has reduced decision times well below the one microsecond barrier and exchange order<br />

round trip times are routinely below a couple of hundred microseconds.<br />

For sure, the new environment has created new challenges in model development, technology<br />

management and human resources and is rapidly transforming the world of Quantitative Finance.<br />



ASSET ALLOCATION-PORTFOLIO<br />

MANAGEMENT<br />



ON THE PERFORMANCE OF A HYBRID GENETIC ALGORITHM: APPLICATION ON THE<br />

PORTFOLIO MANAGEMENT PROBLEM<br />

Vassilios Vassiliadis, Vassiliki Bafa, George Dounias, Management and Decision Engineering Laboratory, Department of<br />

Financial & Management Engineering, University of the Aegean, Greece<br />

Email: v.vassiliadis@fme.aegean.gr, v.bafa@yahoo.gr, g.dounias@aegean.gr, http://fidelity.fme.aegean.gr/decision/<br />

Abstract. In this study, a hybrid intelligent scheme which combines a genetic algorithm with a numerical optimization technique is<br />

applied to a cardinality-constrained portfolio management problem. Specifically, the objective function aims at maximizing the<br />

Sortino Ratio with a constraint on tracking error volatility. What is more, results from the proposed algorithm are compared with other<br />

financial and intelligent heuristics, such as financial rules of thumb and simulated annealing. In order to obtain better insight into the<br />

hybrid’s behavior, out-of-sample results are shown. The contribution of this work is twofold. Firstly, some useful conclusions<br />

regarding the performance of the proposed hybrid algorithm are drawn, based on experimental simulations. Secondly, some basic<br />

points, based on the comparison between the proposed algorithm and the benchmark heuristics, are highlighted. Finally, concerning<br />

the cardinality-constrained optimization problem, financial implications are discussed to some extent.<br />

Keywords: Genetic Algorithm, Portfolio Optimization, financial heuristics, hybrid algorithm, evolutionary mechanisms<br />

1. Introduction<br />

Nowadays, the portfolio optimization problem is of crucial importance, for many reasons. Firstly, there is a portion of<br />

investors who seek good investment opportunities, such as investing in a number of stocks from a certain market,<br />

rather than investing in a single stock or the market as a whole. Investing in a single stock usually incurs high risk.<br />

On the other hand, investing in the stock market as a whole, considered as the portfolio defined by the market, incurs high<br />

transaction costs. Secondly, the portfolio optimization problem becomes even more complex, if multiple, and<br />

sometimes conflicting, objectives or many real-world constraints are considered. As a result, many investors aim at<br />

finding high-quality, near-optimum combinations of stocks which satisfy certain objectives. At this point, it is worth<br />

mentioning that the portfolio management problem was first introduced by Markowitz<br />

in his seminal work (Markowitz, 1952). However, in Markowitz's portfolio selection problem, the objective is to<br />

minimize the portfolio’s risk, with a constraint to the portfolio’s expected return. Recently, other objectives and<br />

constraints, which correspond to the current investment needs, have been of great interest.<br />
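For reference, the classical mean-variance selection problem mentioned above can be sketched as follows (the notation is ours, not taken from the paper):

\[
\min_{w \in \mathbb{R}^{N}} \; w^{\top}\Sigma w
\quad \text{s.t.} \quad
w^{\top}\mu \ge r^{*}, \qquad
\sum_{i=1}^{N} w_i = 1, \qquad
w_i \ge 0,
\]

where \(\Sigma\) is the covariance matrix of asset returns, \(\mu\) the vector of expected returns, and \(r^{*}\) the required portfolio return. A cardinality constraint of the kind studied here adds \(\sum_{i=1}^{N} \mathbf{1}\{w_i > 0\} = K\), which makes the problem combinatorial and motivates heuristics such as genetic algorithms.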

A variety of methodologies have been applied to the portfolio optimization problem, whose solution can be<br />

divided into two discrete phases. In phase one, a combination of assets has to be determined. To this end, several<br />

heuristic techniques such as financial rules of thumb, metaheuristic algorithms such as Tabu Search, and other<br />

intelligent techniques have been applied. After the combination of assets has been determined, the amount of capital<br />

invested in each of these assets (portfolio’s weights) has to be computed. Traditional methodologies from statistics<br />

and mathematics (e.g. non-linear programming algorithms) have been implemented in order to calculate the<br />

portfolio’s weights. However, one drawback of these techniques is that the possibility of getting trapped in local,<br />

low-quality optima is considerable. In order to overcome this obstacle, intelligent metaheuristics from the<br />

field of Artificial Intelligence (AI) may be used. AI comprises several methodologies whose main characteristic is<br />

that each of them employs certain intelligent heuristics, mostly based on the way natural systems work and evolve,<br />

in order to solve problems from various domains, such as industrial applications, financial problems, etc.<br />

In this study, a hybrid genetic algorithm is applied in order to solve a complex portfolio optimization problem.<br />

More specifically, a certain type of genetic algorithm is used in order to find good combinations of assets, and the LM<br />

algorithm is applied so as to find optimal weights for the portfolios. The objective is to maximize the Sortino<br />

ratio, which takes into account both expected return and risk. Moreover, there is a constraint, which<br />

refers to the tracking error volatility of the constructed portfolios. The objective function of the problem, as well as<br />

the constraint on the tracking error volatility, is non-linear. The main aim of this study is to highlight the<br />

effectiveness of the proposed hybrid scheme both in terms of solutions’ quality and computational effort required. In<br />

order to provide some useful insights regarding the performance of the genetic algorithm, some benchmark<br />

methodologies are applied to the problem at hand.<br />
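To make the objective and constraint concrete, the following is a minimal Python sketch of a fitness evaluation for one candidate portfolio. The return data, the minimum acceptable return (MAR), and the tracking-error cap are all hypothetical illustrations, not values taken from the paper:

```python
import numpy as np

def sortino_ratio(returns, mar=0.0):
    # Excess return over the minimum acceptable return (MAR),
    # divided by the downside deviation (RMS of negative excess returns).
    excess = returns - mar
    downside = excess[excess < 0]
    return float(np.mean(excess) / np.sqrt(np.mean(downside ** 2)))

def tracking_error_vol(returns, benchmark):
    # Volatility of the active return relative to the benchmark index.
    return float(np.std(returns - benchmark, ddof=1))

# Hypothetical data: 250 weekly returns for 5 assets and a benchmark index.
rng = np.random.default_rng(42)
assets = rng.normal(0.001, 0.02, size=(250, 5))
benchmark = rng.normal(0.001, 0.015, size=250)

weights = np.full(5, 0.2)          # an equally weighted candidate portfolio
portfolio = assets @ weights

fitness = sortino_ratio(portfolio)                           # objective to maximize
feasible = tracking_error_vol(portfolio, benchmark) <= 0.05  # illustrative TE cap
print(round(fitness, 4), feasible)
```

In the hybrid scheme, an evaluation of this kind would score each candidate asset combination after its weights have been computed by the numerical optimizer.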



This paper is organized as follows. In section 1, some introductory comments regarding both the methodology<br />

and the application domain are presented. In section 2, some basic studies, regarding the application of genetic<br />

algorithms in the portfolio optimization problem, are shown. In section 3, several methodological issues are<br />

discussed in brief. In section 4, the mathematical formulation of the problem is presented. In section 5, results from<br />

experimental simulations are provided. Also, a detailed analysis on these results is given. Finally, in section 6, some<br />

concluding remarks and future work directions are provided.<br />

2. Literature Review<br />

Several studies have been conducted regarding the application of genetic algorithms in the portfolio optimization<br />

problem. In this section, a selection of standard works in the field is presented in brief. In any case, this analysis is<br />

not considered to be exhaustive, but only representative of the application of genetic algorithms in portfolio<br />

optimization.<br />

In their study, [Branke, Scheckenbach, Stein, Deb & Schmeck 2009] proposed a hybrid scheme comprising<br />

principles from evolutionary algorithms and the critical line algorithm (for parametric quadratic programming) with<br />

the aim of solving complex portfolio optimization problems with nonlinear constraints. The task of the MOEA is to<br />

define a set of subsets of the solution space. Then, for each subset found, the critical line algorithm generates a set of<br />

optimal portfolios. The authors deal with the classical mean-variance optimization problem, and their dataset comprised<br />

the Hang Seng 31, S&P 98, Nikkei 225 and S&P 500 markets. The cardinality constraint was set to 4 and 8 assets.<br />

Parameters for the evolutionary algorithms were: population size of 250 and 30 generations. Results are promising<br />

regarding the superiority of the proposed methodology.<br />

In their paper, [Maringer & Kellerer 2003] proposed a hybrid algorithm consisting of the Simulated Annealing<br />

algorithm and several mechanisms from Evolutionary Programming. The optimization problem was the classical<br />

mean-variance Markowitz’s approach. Cardinality constraint was set to 9 and 39, for two datasets, namely the<br />

FTSE100 and DAX30 indices. In order to find the optimum solution, the population size was set to 100 and the<br />

number of generations to 750. First tests led to promising results and supported the findings for the algorithm<br />

presented in their study.<br />

In another work, [Streichert, Ulmer & Zell 2003] dealt with the classical Markowitz portfolio optimization<br />

problem, using principles of evolutionary strategies. Specifically, two extensions to evolutionary algorithms are<br />

introduced. First, a problem specific representation of evolutionary algorithms is introduced. Second, in the previous<br />

algorithm, a local search mechanism is introduced in order to enhance the performance of the scheme. As far as the<br />

application domain is concerned, data from the Hang Seng 31 stock market are used. Cardinality was set to 2, 4 and<br />

6 assets, while the parameters of the evolutionary scheme were: population size of 500, tournament group size 8,<br />

crossover probability 1 and mutation probability 0.01.<br />

In their paper, [Chen & Hou 2006] applied a modified genetic algorithm, which has the ability to efficiently<br />

solve combinatorial optimization problems. The main characteristic of the algorithm is the specific combination<br />

encoding scheme and genetic operators, both of which are designed for solving combination optimization problems.<br />

This method is applied to the portfolio optimization problem with the aim of maximizing the portfolio’s expected<br />

return with a built-in constraint on the portfolio’s risk. Regarding the application domain, data from the Taiwan<br />

Stock Exchange were used. The parameters of the genetic algorithm were: population size of 200, crossover<br />

probability 1 and mutation probability 0.05. The cardinality constraint was set to 20 assets. The experimental results demonstrate<br />

the feasibility and effectiveness of the combination GA for the integer portfolio optimization problem.<br />

[Chen, Hou, Wu & Chang 2009] applied a specific type of genetic algorithms to the investment portfolio<br />

problem. The objective of the problem is to maximize portfolio’s expected return, under a constraint in the<br />

portfolio’s risk. Dataset comprised of stocks from the Dow Jones Industrial Average. The portfolios were formed<br />

using 9 months of historical data and they were adjusted every 3 months. The parameters of the genetic algorithm<br />

were: population size of 200, crossover probability 1, mutation probability 0.05 and number of generations 500.<br />

All in all, it can be seen that several types of genetic algorithms, from standard to more advanced, have been<br />

applied to the portfolio optimization problem. In most cases, the formulation of the problem refers to the classical<br />



Markowitz’s mean-variance optimization. However, there have been studies which incorporated non-linear real-world<br />

constraints. The applicability and effectiveness of genetic algorithms in the portfolio optimization problem is<br />

quite obvious.<br />

3. Methodology<br />

In this study, a variant of the genetic algorithm is proposed, combined with a mathematical optimization tool, for<br />

solving the portfolio management problem. The standard genetic algorithm was first proposed by Holland<br />

[Holland 1992]. Its main characteristic lies in the concept of an evolutionary process. More particularly, as in the<br />

real world, genetic algorithms apply the mechanisms of selection, crossover and mutation in order to evolve the<br />

members of a population through a number of generations. The ultimate goal is to reach a population of good quality.<br />

In order to assess the quality of each member of the population, the concept of fitness value is introduced.<br />

All in all, genetic algorithms are computationally simple procedures, though powerful in their search for the optimal<br />

solution. Moreover, they are not limited by any assumptions regarding the search space, thus enabling them to employ<br />

quite flexible search strategies.<br />

In practice, the genetic algorithm operates upon a number of possible solutions called the population. Each member<br />

of the population is called a chromosome, representing a solution to the problem. In this study, each combination of<br />

assets (portfolio), along with their corresponding weights, comprises a chromosome. Therefore, the population is a<br />

collection of the ‘best-so-far’ portfolios. At first, the population is randomly initialized. Then, for each generation<br />

(epoch) of the algorithm, the chromosomes evolve by combining with each other, using the crossover and mutation<br />

operators. As the number of generations increases, better quality solutions are kept in the population. However, due<br />

to the fact that the size of the solution space is vast, there is a considerable probability of stagnation, meaning that<br />

after a certain number of generations the evolution process may halt, i.e. the genetic algorithm is stuck in a local<br />

optimum region. This is a potential problem for the performance of the genetic algorithm. It is important to<br />

note, at this point, that the evaluation of each chromosome is based on the value of the objective function of the<br />

optimization problem at-hand (fitness value).<br />
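The chromosome encoding described above (a combination of assets plus their corresponding weights) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the normalization step that enforces the budget constraint are assumptions.

```python
import random

def random_portfolio(n_assets, cardinality, floor=-1.0, ceiling=1.0):
    """One chromosome: a set of distinct asset indices plus weights summing to 1."""
    assets = random.sample(range(n_assets), cardinality)
    raw = [random.uniform(floor, ceiling) for _ in assets]
    total = sum(raw)
    # Normalize so the budget constraint (weights sum to 1) holds.
    weights = [w / total for w in raw]
    return assets, weights

def init_population(pop_size, n_assets, cardinality):
    """Random initialization of the population, as in the first step of the GA."""
    return [random_portfolio(n_assets, cardinality) for _ in range(pop_size)]

# Dimensions matching the experimental setup: 49 stocks, population 200, cardinality 10.
population = init_population(pop_size=200, n_assets=49, cardinality=10)
```

Evaluating a chromosome then amounts to computing the objective function of the optimization problem on its (assets, weights) pair.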

In the previous paragraph, the concepts of evolutionary operators, i.e. selection, crossover and mutation, were<br />

mentioned. Crossover and mutation are applied to already selected (good) members of the population so as to<br />

produce ‘children’ with better solution-characteristics (assets/weights) and therefore better fitness value. However,<br />

the main point of focus in this study lies in the mechanism of selection, which refers to the procedure of picking<br />

members of the existing population in order to produce descendants (new portfolios). Particularly, the basic aim is<br />

the assessment of how the choice of a specific selection strategy affects the performance of the genetic algorithm.<br />

Three alternative selection operators are implemented: a) selection of N-best members from the existing population,<br />

b) application of roulette-wheel process for selection, c) application of tournament selection process. According to<br />

the first method, only the best solutions from the existing population are considered for producing the next generation.<br />

The second method (roulette wheel) can be described as follows: a probability of selection, based on the fitness<br />

value, is assigned to each member of the existing population. The mathematical formula which calculates this<br />

probability is the standard fitness-proportionate rule:<br />

pi = fi / (f1 + f2 + … + fN) (1)<br />

where fi is the fitness value of the ith member and N is the size of the existing population.<br />

Finally, the third approach (tournament selection) works as follows: firstly, all members of the existing<br />

population are randomly split into n groups. Then, from each group, the best member is chosen.<br />
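The three selection operators can be sketched as follows. This is a hedged Python sketch: the paper does not specify implementation details such as how groups are formed, so the group-splitting scheme and the toy fitness function below are assumptions.

```python
import random

def select_n_best(population, fitness, n):
    """a) Keep only the n members with the highest fitness (no randomness)."""
    return sorted(population, key=fitness, reverse=True)[:n]

def roulette_wheel(population, fitness, n):
    """b) Fitness-proportionate selection: p_i = f_i / sum_j f_j (assumes f_i > 0)."""
    weights = [fitness(m) for m in population]
    return random.choices(population, weights=weights, k=n)

def tournament(population, fitness, n_groups):
    """c) Randomly split the population into n_groups and keep each group's best."""
    shuffled = population[:]
    random.shuffle(shuffled)
    groups = [shuffled[i::n_groups] for i in range(n_groups)]
    return [max(g, key=fitness) for g in groups if g]

# Toy population where each member's fitness is its own value.
pop = list(range(1, 21))
fit = lambda x: x
best5 = select_n_best(pop, fit, 5)   # deterministic: the five largest values
```

Note the design difference the paper exploits: only the N-best operator is fully deterministic, roulette wheel is the most random, and tournament sits in between (random grouping, deterministic winners).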

In order to determine the weights of portfolio’s assets found by the genetic algorithm, a local search non-linear<br />

programming technique, namely the Levenberg-Marquardt method, is applied. In this case, the solution space is<br />

continuous and has upper and lower limits, defined by the floor and ceiling constraints. For a given combination of<br />

assets, the aim of the Levenberg-Marquardt method is to find a vector of weights which minimizes the given<br />

objective function under certain constraints.<br />
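The weight-determination step can be illustrated with a textbook Levenberg-Marquardt iteration. The sketch below is a generic two-parameter version applied to a toy least-squares problem; it is an illustration of the method, not the paper's implementation, and it omits the floor/ceiling bound handling that the bounded portfolio subproblem would additionally require (e.g. via projection).

```python
def levenberg_marquardt(residuals, jacobian, x0, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt for a 2-parameter least-squares problem.
    Each step solves the damped normal equations (J^T J + lam*I) delta = -J^T r."""
    a, b = x0
    for _ in range(iters):
        r = residuals(a, b)
        J = jacobian(a, b)
        JTJ = [[sum(row[i] * row[j] for row in J) for j in range(2)] for i in range(2)]
        JTr = [sum(row[i] * ri for row, ri in zip(J, r)) for i in range(2)]
        A = [[JTJ[0][0] + lam, JTJ[0][1]], [JTJ[1][0], JTJ[1][1] + lam]]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        # Solve the damped 2x2 system directly (Cramer's rule).
        da = (-JTr[0] * A[1][1] + JTr[1] * A[0][1]) / det
        db = (-JTr[1] * A[0][0] + JTr[0] * A[1][0]) / det
        new_a, new_b = a + da, b + db
        if sum(v * v for v in residuals(new_a, new_b)) < sum(v * v for v in r):
            a, b, lam = new_a, new_b, lam / 10   # accept step, trust the model more
        else:
            lam *= 10                            # reject step, increase damping
    return a, b

# Toy problem: fit y = a*x + b to exact data (1,3), (2,5), (3,7), so a=2, b=1.
xs, ys = [1.0, 2.0, 3.0], [3.0, 5.0, 7.0]
res = lambda a, b: [a * x + b - y for x, y in zip(xs, ys)]
jac = lambda a, b: [[x, 1.0] for x in xs]
a_hat, b_hat = levenberg_marquardt(res, jac, (0.0, 0.0))
```

In the hybrid scheme, the residual function would instead be built from the portfolio objective for a fixed asset combination, with the weights as the unknowns.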



4. Portfolio Optimization Problem<br />

The portfolio optimization problem deals with finding the optimal combination of assets and their corresponding<br />

weights, as well. Harry M. Markowitz, with his seminal paper [Markowitz 1952], established a new framework for<br />

the study of portfolio optimization. In his classical problem, the objective of the investor is to minimize the<br />

portfolio’s risk, while imposing a constraint on the portfolio’s expected return.<br />

Nowadays, more complex formulations of the portfolio optimization problem are tackled. Non-linear objectives<br />

and constraints are introduced to the classical formulation, so that real-world situations are reflected. In this study, the<br />

objective of the portfolio optimization problem is to maximize a financial ratio, namely the Sortino ratio [Kuhn<br />

2006]. The Sortino ratio is based on the preliminary work of Sharpe (Sharpe ratio) [Sharpe 1994], who developed a<br />

reward-to-variability ratio. The main concept was to create a criterion that takes into consideration both an asset’s<br />

expected return and its volatility (risk). However, in recent years, investors started to adopt the concept of “bad<br />

volatility”, which considers only returns below a certain threshold. The Sortino ratio considers only the volatility of returns<br />

which fall below a defined threshold.<br />
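The downside ("bad") volatility and the Sortino ratio can be computed as follows. This is a sample-based sketch; using the full sample size in the denominator and the risk-free rate as the default threshold are conventional choices, assumed here rather than stated by the paper.

```python
import math

def downside_deviation(returns, threshold=0.0):
    """Root-mean-square of shortfalls below the threshold ('bad volatility').
    The denominator is the full sample size, a common convention."""
    shortfalls = [(threshold - r) ** 2 for r in returns if r < threshold]
    return math.sqrt(sum(shortfalls) / len(returns))

def sortino_ratio(returns, risk_free=0.0, threshold=0.0):
    """Excess return over the risk-free rate, divided by downside deviation only."""
    mean_return = sum(returns) / len(returns)
    return (mean_return - risk_free) / downside_deviation(returns, threshold)

rets = [0.02, -0.01, 0.03, -0.02, 0.01]
ratio = sortino_ratio(rets)   # only the two negative returns contribute to the risk term
```

Unlike the Sharpe ratio, upside swings (here 0.02, 0.03, 0.01) do not inflate the denominator.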

Also, in this work, a constraint on tracking error volatility, i.e. a measure of the deviation between the<br />

portfolio’s and the benchmark’s returns, is imposed. This restriction refers to passive portfolio management, namely<br />

index tracking, which aims at constructing a portfolio using assets from a (stock) market in a way that attempts to<br />

reproduce the performance of the market itself. Passive portfolio management, as a concept, is adopted by investors<br />

who believe that financial markets are efficient, i.e. it is impossible to consistently beat the market.<br />
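Tracking error volatility, and its treatment as a penalty term in the objective (the paper's footnote states a penalty weight of 0.8), can be sketched as follows. The exact penalty functional form is an assumption, since the paper only states that the constraint is folded into the objective.

```python
import statistics

def tracking_error_volatility(portfolio_returns, benchmark_returns):
    """Standard deviation of the active returns (portfolio minus benchmark)."""
    active = [rp - rb for rp, rb in zip(portfolio_returns, benchmark_returns)]
    return statistics.pstdev(active)

def penalized_objective(sortino, tev, h=0.0080, penalty=0.8):
    """Hypothetical penalty form: subtract the weighted violation of TEV <= H."""
    return sortino - penalty * max(0.0, tev - h)

# Toy daily returns for a portfolio and its benchmark.
rp = [0.010, 0.005, -0.002, 0.007]
rb = [0.008, 0.006, -0.001, 0.004]
tev = tracking_error_volatility(rp, rb)
```

A portfolio whose TEV stays below the threshold H pays no penalty, so the GA is steered toward index-tracking solutions without a hard constraint.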

The formulation of the financial optimization problem is presented below:<br />

max [E(rP) − rf] / θ0(rP) (2)<br />

s.t.<br />

w1 + w2 + … + wN = 1 (3)<br />

−1 ≤ wi ≤ 1, i = 1, …, N (4)<br />

card(w) ≤ K (5)<br />

σ(rP − rB) ≤ H (6) 1<br />

θ0(rP) = sqrt( ∫_{−∞}^{rf} (rf − r)² f(r) dr ) (7)<br />

where,<br />

E(rP), is the portfolio’s expected return<br />

rf, is the risk-free return<br />

θ0(rP), is the volatility of returns which fall below a certain threshold (here the risk-free return), as given in (7)<br />

wi, is the percentage of capital invested in the ith asset<br />

K, is the maximum number of assets contained in a portfolio (cardinality constraint)<br />

rB, is the benchmark’s daily return<br />

H, is the upper threshold for the tracking error volatility 2<br />

f(r), is the probability density function of the portfolio’s returns. Assuming that the portfolio’s returns follow a normal<br />

distribution with mean μ and standard deviation σP, the probability density function can be defined as<br />

f(r) = (1 / (σP √(2π))) exp(−(r − μ)² / (2σP²)).<br />

1 The constraint on tracking error volatility is incorporated into the objective function, using a penalty term (0.8)<br />

2 H equals 0.0080<br />

5. Experimental results<br />

In order to extract some useful conclusions regarding the performance of the hybrid scheme, a number of<br />

independent simulations have been conducted. In each simulation, the configuration settings of both the algorithm<br />

and the problem have been properly adjusted.<br />

The dataset comprised 93 daily returns, corresponding to the period 04/01/2010 – 29/05/2010, of 49 stocks of<br />

the FTSE/ASE40 Index. At this point, it has to be mentioned that all stocks of the Index have been taken into<br />

consideration (even those stocks corresponding to firms which have been excluded from the Index). The reason for doing<br />

this is to eliminate the effect of survivorship bias 3 .<br />

In what follows, the configuration settings of both the hybrid algorithm and the portfolio management problem<br />

are presented (Table 1). As can be seen, a range of values for the number of generations and the cardinality of the<br />

portfolio are used.<br />

Parameters for Genetic Algorithm<br />

Population 200<br />

Generations 20/30<br />

Crossover Probability 0.90<br />

Mutation Probability 0.35<br />

Number of best members for selection (for n-best members selection) 20<br />

Number of groups (for tournament selection) 20<br />

Number of members in each group (for tournament selection) 10<br />

Parameters for optimization problem<br />

[Floor Ceiling] – Constraint [-1 1]<br />

Cardinality 10/20<br />

Table 1. Configuration settings<br />

In table 2, the main findings of the simulations are presented. Specifically, percentiles of the distribution of the<br />

independent runs are presented. If the 0.05 percentile of variable X equals a, then there is a<br />

probability of 95% that X will take values larger than a. So, it is preferable for the percentiles to have large values,<br />

indicating a distribution that is shifted as far to the right as possible. What is more, benchmark results from other heuristics<br />

are presented, in order to provide some means of comparison. The following financial heuristics have been used:<br />

constructing portfolios with a) maximum Sortino ratio, b) maximum cumulative return, c) maximum Sharpe ratio, d)<br />

maximum expected return with a penalty term in tracking error volatility and e) random choice of assets. The<br />

weights are calculated by applying the Levenberg-Marquardt (LMA) method.<br />

Based on the results, the following basic points can be made. First of all, the roulette wheel process yields<br />

the worst results in all cases. This mechanism relies on random selection to a great extent. Thus, one<br />

could argue that the selection of members from the existing population is implemented through a ‘pure’ random<br />

process. On the other hand, it seems that the selection of N-best members for reproduction is the best selection<br />

operator. Preliminary results indicate that although this mechanism is biased to pick good members from the<br />

population, it does not get stuck in local optima regions, compared to the alternative mechanisms. The distribution of<br />

simulation results is denser and located further to the right. Also, we could remark that the hybrid scheme outperforms<br />

the financial rules. As far as the financial implications are concerned, it is quite obvious that portfolios<br />

with many assets yielded better results, compared to low cardinality portfolios.<br />

3 Tendency for failed companies to be excluded from performance indices mainly because they no longer exist. This effect often causes the<br />

results of the studies to skew higher because only companies which were successful enough to survive until the end of the time period of the<br />

study are included.<br />



Percentiles 0.05 0.50 0.95<br />

Hybrid Algorithm<br />

Cardinality: 10<br />

Generations: 20<br />

N-best 1.9424 2.2061 2.5134<br />

Roulette Wheel 1.3118 1.6167 1.9677<br />

Tournament Selection 1.8482 2.1742 2.5935<br />

Generations: 30<br />

N-best 1.9685 2.4310 2.6940<br />

Roulette Wheel 1.5755 1.7760 2.1270<br />

Tournament Selection 1.9720 2.3090 2.7317<br />

Cardinality: 20<br />

Generations: 20<br />

N-best 2.2554 2.6970 3.3285<br />

Roulette Wheel 1.8660 2.1488 2.4879<br />

Tournament Selection 2.1896 2.6218 3.0311<br />

Generations: 30<br />

N-best 2.4437 2.8810 3.4704<br />

Roulette Wheel 2.0992 2.3512 2.7197<br />

Tournament Selection 2.3770 2.7617 3.3117<br />

Financial Rules (best solution)<br />

Cardinality: 10 Cardinality: 20<br />

Sortino ratio 0.6272 0.8068<br />

Cumulative return 0.5909 0.7795<br />

Sharpe ratio 0.5515 0.4784<br />

Expected return 4 0.7136 0.7747<br />

Random asset selection 0.3024 1.6182<br />

Table 2. Simulation results<br />

In table 3, bootstrapping is applied to the original dataset with the aim of producing a number of different<br />

scenarios. The reason for that is to examine the performance of both the hybrid algorithm and the financial heuristics<br />

in ‘unknown data’, by providing statistics of the distribution of results 5 [Gilli & Winker 2008].<br />

In order to apply the specific sampling method, we considered that the cardinality constraint defines, in a way, a<br />

unique optimization problem, i.e. results from the 10-asset problem cannot be compared directly to the 20-asset<br />

problem (where the distribution of results is more acceptable). Also, for each cardinality the ‘global’-best portfolio<br />

found by the hybrid scheme is used for implementing the resampling technique. In essence, 500 scenarios for the<br />

stocks’ return series were produced based on the original dataset. By doing this, the unique characteristics of the<br />

original dataset’s distribution are kept. Then, for each scenario, the best portfolio was applied and a value for the<br />

objective function was calculated. As a final result, the percentiles of the objective function’s distribution are<br />

presented in each case. For the 10-asset portfolio optimization problem, the ‘global’-best portfolio was found when<br />

the number of generations was set to 30 and tournament selection was applied. For the 20-asset portfolio<br />

optimization problem, the hybrid scheme yielded the ‘best’ results in 20 generations and when selection of N-best<br />

members from the population was applied. Results indicate that the hybrid scheme outperforms the financial rules in<br />

‘unknown’ data.<br />
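The resampling step can be sketched as a plain row bootstrap. Resampling whole daily return vectors (rows) preserves the cross-sectional dependence between stocks; that reading of the procedure, and the function names, are assumptions rather than details stated in the paper.

```python
import random

def bootstrap_scenarios(returns, n_scenarios, seed=None):
    """Resample historical daily return vectors with replacement, producing
    new scenarios that retain the distributional properties of the dataset."""
    rng = random.Random(seed)
    horizon = len(returns)
    return [[rng.choice(returns) for _ in range(horizon)] for _ in range(n_scenarios)]

# Toy dataset: 4 days x 2 assets; each row is one day's return vector.
history = [[0.01, 0.02], [-0.01, 0.00], [0.03, -0.02], [0.00, 0.01]]
scenarios = bootstrap_scenarios(history, n_scenarios=500, seed=1)
```

The best portfolio found on the original data is then re-evaluated on each scenario, yielding the distribution of objective values summarized in Table 3.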

Percentiles 0.05 0.50 0.95<br />

Cardinality 10 20 10 20 10 20<br />

Hybrid Algorithm 2.1161 2.3176 2.9547 3.7487 4.0191 6.7839<br />

Sortino ratio 0.1754 0.3885 0.6201 0.8068 1.0506 1.2000<br />

Cumulative return 0.1970 0.4338 0.5923 0.7820 1.0365 1.3816<br />

Sharpe ratio 0.1840 0.2719 0.5425 0.4784 0.9809 0.7187<br />

Expected return 0.3673 0.4086 0.7050 0.7747 1.1259 1.2258<br />

Random asset selection 0.0152 1.0740 0.3105 1.6182 0.6277 2.1613<br />

Table 3. Results from bootstrapping<br />

4 Tracking error volatility constraint included<br />

5 By applying bootstrapping to the original dataset, the produced scenarios of returns retain the properties of the original dataset.<br />

This is a ‘fairer’ approach than applying the portfolios found to another dataset (there is no forecasting component in the<br />

proposed methodology).<br />



6. Conclusions<br />

In this study, the performance of a hybrid intelligent scheme was analyzed. More specifically, the proposed<br />

technique comprised a genetic algorithm and the LMA algorithm. This hybrid scheme was applied to a portfolio<br />

optimization problem. The GA component aimed at finding high-quality combinations of assets, whereas the LMA<br />

algorithm computed the optimal weights. This paper focused on the application of alternative mechanisms for<br />

selection, which is a main component of the genetic algorithm. Three different mechanisms were applied: N-best<br />

members, roulette wheel and tournament selection. The selection operator plays a vital role in the process of<br />

portfolio construction.<br />

Results from experimental simulations indicate that the N-best mechanism performs best<br />

compared to the other two strategies. In order to apply the specific mechanism, the only information needed is the<br />

N-best portfolios in each generation. The important point is that no random selection component is involved.<br />

So, there is a ‘clear’ bias towards the best solutions. However, this bias is not always beneficial, because in several studies<br />

considering only the best-so-far solutions led to getting stuck in local optima regions, hindering the<br />

optimization process. Another finding of this study is that the hybrid intelligent scheme outperformed the financial<br />

rules implemented, in all cases. Nevertheless, in order to assess the performance of the proposed techniques in<br />

‘unknown’ data, we applied a method of resampling from the original dataset so as to produce new scenarios of the<br />

stocks’ returns-series. Afterwards, the best portfolios were applied to the produced data series, and the distributions<br />

of results were calculated. The hybrid scheme yielded better results, compared to the financial heuristics, in this case<br />

also.<br />

However, in order to obtain better insight into the overall performance of the hybrid scheme, the<br />

following basic future research directions are proposed. First of all, the proposed hybrid intelligent scheme should<br />

be compared with other hybrid intelligent algorithms, whose main characteristic (and advantage) is the application<br />

of good searching strategies. Complex search strategies could be able to capture any trends and patterns in the<br />

dataset. Another interesting direction is the implementation of an intelligent trading system, whose main<br />

components could be: a) a hybrid nature-inspired algorithm for portfolio optimization, b) a set of intelligent trading<br />

rules (for application in the validation phase) and c) re-balancing of the portfolio in specific (maybe pre-determined)<br />

time intervals.<br />

5. References<br />

Branke, J., Scheckenbach, B., Stein, M., Deb, K. & Schmeck, H. 2009, ‘Portfolio Optimization with<br />

an Envelope-based Multi-objective Evolutionary Algorithm’, European Journal of Operational Research, pp.<br />

684-693<br />

Chen, J. S. & Hou, J. L. 2006, ‘A Combination Genetic Algorithm with Applications on Portfolio Optimization’, In<br />

Advances in Applied Artificial Intelligence, Berlin, Germany, vol. 4031, pp. 197-206<br />

Chen, J. S., Hou, J. L., Wu, S. M. & Chang, C. Y. W. 2009, ’Constructing investment strategy portfolio by<br />

combination genetic algorithms’, Expert Systems with Applications, vol. 36, pp. 3824-3828<br />

Gilli, M. & Winker, P. 2008, ‘A review of heuristic optimization methods in econometrics’, Swiss Finance Institute<br />

Research Paper, no. 08-12<br />

Holland, J. H. 1992, ‘Genetic Algorithms’, Scientific American, pp. 66-72<br />

Jeurissen, R. 2005, ‘A hybrid genetic algorithm to track the Dutch AEX-Index’, Bachelor thesis, Informatics &<br />

Economics, Faculty of Economics, Erasmus University of Rotterdam<br />

Kuhn, J. 2006, ‘Optimal risk-return tradeoffs of commercial banks and the suitability of probability measures for<br />

loan portfolios’, Springer Berlin<br />

Maringer, D. & Kellerer, H. 2003, ‘Optimization of Cardinality Constrained Portfolios with a Hybrid Local Search<br />

Algorithm’, OR Spectrum, vol. 25, no. 4, pp. 481-495<br />

Markowitz, H. 1952, ‘Portfolio Selection’, The Journal of Finance, vol. 7, no. 1, pp. 77-91<br />

Sharpe, W. F. 1994, ‘The Sharpe ratio’, Journal of Portfolio Management, pp. 49-58<br />

Streichert, F., Ulmer, H. & Zell, A. 2003, ‘Evolutionary Algorithms and Cardinality Constrained Portfolio<br />

Optimization Problem’, Selected papers of the International Conference on Operations Research, Heidelberg,<br />

pp. 3-5<br />



ASSET PRICING<br />



ASSET PRICING IN THE CYPRUS STOCK EXCHANGE: ARE STOCKS FAIRLY PRICED?<br />

Haritini Tsangari & Maria Elfani, University of Nicosia, Cyprus<br />

Email: tsangari.h@unic.ac.cy<br />

Abstract. The purpose of the current research is to examine the efficiency of the Cyprus Stock Exchange (CSE) during the years<br />

2002-2007. This is the time period after the big crash of the CSE, when the index rose more than 700% from 1st January 1999 to 1st<br />

December 1999 and then fell more than 80% by 30th September 2001. It is also the period that marks the beginning of the liberalization of<br />

interest rates in Cyprus, which started in 2002. Asset pricing is performed for 30 stocks listed on the CSE, using a two-pass regression<br />

methodology. The FTSE/CySE 20 Index is used as a market proxy and 13-week Cyprus T-bills are used as the risk-free asset. Results<br />

from the first-pass regression showed that 27% of the stocks had a beta coefficient higher than 1, and 62.5% of these stocks were<br />

from the financial industry. Results from the second-pass regression showed that, overall, the stocks are overvalued. Moreover, the<br />

regression coefficient for beta is not equal to the average market risk premium and there is no relationship between stock risk premium<br />

and beta. An extension of the second-pass regression tested the hypothesis that non-systematic risk should not explain the risk<br />

premium. The results indicated that the hypothesis was not supported. Conclusions and recommendations are made accordingly.<br />

Keywords: Cyprus Stock Exchange; Two-step regression; Asset pricing; Efficiency.<br />

JEL classification: G12<br />

1 Introduction<br />

The Cyprus Stock Exchange (CSE) is an example of an emerging market whose short but turbulent history makes it<br />

interesting to study. The CSE started its operation officially in 1996. Although the conditions appeared to be<br />

promising, CSE experienced a big boom in 1999 and a huge crash in the next two years (see Tsangari, 2011). Many<br />

people, mostly small investors, saw their lifetime savings vanish into thin air, while the crash was accompanied by a<br />

‘scandal’ and a number of violations and breaches. The current study will examine the efficiency of the CSE for the<br />

years 2002-2007. This period was intentionally selected for many reasons, the main reason being that it is the time<br />

period right after the big crash of the CSE and thus marks a new era for the market.<br />

The traditional Capital Asset Pricing Model (Sharpe, 1964; Lintner, 1965; Mossin, 1966) will be used, in<br />

a first attempt to perform the necessary tests for the Cyprus stock market during the period of interest. More<br />

than four decades after its development, the CAPM is still widely used both by academics and practitioners, with its<br />

attraction being that it offers powerful and intuitively pleasing predictions about how to measure risk and its relation<br />

with expected return (Fama & French, 2004). A two-stage regression methodology will be employed, where in the<br />

first-pass time series regression the betas of the stocks will be estimated, and in the second-pass regression the betas<br />

will be used to test the risk-return relationship (Miller & Scholes, 1972). If the relevant market proves to be<br />

efficient, investors will benefit from the fair stock prices with all available information fully reflected in them.<br />

2 Asset Pricing in the Cyprus Stock Exchange<br />

The Capital Asset Pricing Model (CAPM) provides a testable prediction about the relation between risk and<br />

expected return by identifying a portfolio that must be efficient if asset prices are to clear the market of all assets<br />

(Fama & French, 2004). In an efficient market all available information is fully reflected on the stock price,<br />

providing its fair price, ensuring that it is neither underpriced nor overpriced.<br />

2.1 The Capital Asset Pricing Model<br />

CAPM builds on the portfolio theory developed by Harry Markowitz (Markowitz, 1952). The model assumes that<br />

all investors are risk averse and when making investment decisions they only care about the mean and variance of<br />

their one-period investment return, minimizing the variance given the expected return and maximizing the expected<br />

return given the investment variance. Sharpe (1964) and Lintner (1965) add two key assumptions to the Markowitz<br />

model to identify a portfolio that must be mean-variance-efficient. The first assumption is complete agreement:<br />



given market clearing asset prices at time t-1, investors agree on the joint distribution of asset returns from time t-1<br />

to time t, which is the distribution from which the returns we use to test the model are drawn. The second<br />

assumption is that there is borrowing and lending at a risk-free rate, which is the same for all investors and does not<br />

depend on the amount borrowed or lent. The familiar Sharpe-Lintner CAPM equation is given by:<br />

E(Ri) = Rf + [E(RM) − Rf]βi, i = 1, …, N. (1)<br />

In equation (1), βi is the market beta of asset i, i.e. the asset’s systematic risk, given by the covariance of its return<br />

with the market return, RM, divided by the variance of the market return:<br />

βi = Cov(Ri, RM) / σ²(RM)<br />

The beta of the fully diversified market is equal to 1. Therefore if the beta of the stock is more than 1, it means<br />

that the stock is more volatile than the market, while if beta is less than 1 it means that the stock has lower risk. In<br />

words, the CAPM in equation (1) states that the expected return on any asset i is the risk-free interest rate, Rf, plus a<br />

risk premium, which is the asset’s market beta, ßi, times the premium per unit of beta risk or expected excess return<br />

on the market, E(RM) - Rf. Similarly, the model predicts that the assets plot along a straight line, with an intercept<br />

equal to Rf and a slope equal to E(RM) - Rf.<br />
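Equation (1) and the Security Market Line can be evaluated directly. A minimal sketch follows; the numerical values of the risk-free rate and the expected market return are illustrative assumptions only.

```python
def capm_expected_return(beta, risk_free, market_return):
    """Sharpe-Lintner CAPM / SML: E(Ri) = Rf + beta * (E(RM) - Rf)."""
    return risk_free + beta * (market_return - risk_free)

# Illustrative values: Rf = 3%, E(RM) = 8%.
er_defensive = capm_expected_return(0.5, 0.03, 0.08)   # beta < 1: below-market risk
er_aggressive = capm_expected_return(1.5, 0.03, 0.08)  # beta > 1: above-market risk
```

As the text states, beta = 0 recovers the risk-free rate (the intercept) and beta = 1 recovers the expected market return, with the slope equal to the market risk premium.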

Unrestricted riskfree borrowing and lending is considered an unrealistic assumption. Black (1972) develops a<br />

version of the CAPM without riskfree borrowing or lending. He shows that the CAPM’s key result – that the market<br />

portfolio is mean-variance-efficient - can be obtained by instead allowing unrestricted short sales of risky assets<br />

(which, however, can also be considered to be an unrealistic assumption). The Black version says only that E(RzM),<br />

the expected return on the zero-beta portfolio (the portfolio uncorrelated with the market), must be less than the expected market return, so the premium for beta is positive. In contrast, in the Sharpe-Lintner<br />

version of the model, E(RzM) must be the riskfree interest rate, Rf, and the premium per unit of beta risk is E(RM) -<br />

Rf. The success of the Black version of the CAPM in early tests produced a consensus that the model is a good<br />

description of expected returns. These early results, coupled with the model’s simplicity and intuitive appeal, pushed<br />

the CAPM to the forefront of finance (Fama & French, 2004).<br />

The Sharpe-Lintner and Black versions of the CAPM share the prediction that the market portfolio is mean-variance-efficient.<br />

This implies that differences in expected return across securities are entirely explained by<br />

differences in market beta; other variables should add nothing to the explanation of expected return. It can be argued<br />

that the efficiency of the market portfolio is based on many unrealistic assumptions, but, as Fama & French (2004)<br />

stress, all interesting models involve unrealistic simplifications, which is why they must be tested against data. The<br />

hypothesis that market betas completely explain expected returns can be tested using cross-section regressions (e.g.<br />

Fama & MacBeth, 1973), or using time-series regressions (e.g. Gibbons, 1982; Stambaugh, 1982). Fama & French<br />

(2004) look at both time-series and cross-section regressions from a different point of view: not as a strict test of the<br />

CAPM, but as test of whether a specific proxy for the market portfolio is efficient in the set of portfolios that can be<br />

constructed from it and the left-hand-side assets used in the test, so they can be considered as tests of efficiency.<br />

Some of the recent cases of empirical tests which have shown consistency with the CAPM model, finding<br />

significant relationships between beta and returns include Fletcher (1997) (UK stock market for the period 1975-<br />

1994), Hodoshima, Garza-Gomez, & Kunimura (2000) (Tokyo Stock Exchange for the period 1956-1995), Shakrani<br />

& Ismail (2001) (Malaysian Islamic unit trust for the period 1999-2001), Sandoval & Saens (2004) (Argentina,<br />

Brazil, Chile and Mexico for the period 1995-2002), Tang & Shum (2004) (Singapore market), and Gursoy &<br />

Rejepova (2007) (Turkish Stock Exchange).<br />

2.2 The two-step regression methodology<br />

The current study will follow the two-step regression methodology used in the early CAPM tests (e.g. Miller &<br />

Scholes, 1972). The first-pass regression is given by the equation:<br />

Rit − Rft = ai + bi(RMt − Rft) + eit (2)<br />


where Rit – Rft is the monthly risk premium of stock i, bi is the sample estimate of the beta coefficient of each stock,<br />

RMt – Rft is the monthly risk premium of the market index over the sample period (t=1,…T, where T=total number of<br />

months under examination) and eit is the monthly residual for stock i. Therefore, in the first pass regression the beta<br />

of the stock is estimated using time series regression, where each of the risk premiums is regressed on the market<br />

risk premium.<br />

The Sharpe-Lintner CAPM says that the average value of an asset’s excess return (Rit - Rft) is completely<br />

explained by its average realized CAPM risk premium (its beta times the average value of (RMt - Rft)). This implies<br />

that “Jensen’s alpha,” the intercept term in the time-series regression of equation (2) is zero for each asset. This can<br />

be seen by rearranging equation (1), by moving Rf to the left side; the CAPM assumes that the intercept alpha is<br />

equal to zero. If alpha is zero, then the asset is exactly on the Security Market Line (SML), the line which measures<br />

the linear relationship between risk (beta) and expected return (E(Ri)), or that the asset is fairly priced. Similarly, if<br />

alpha is positive then the asset is plotted above the SML or the asset is undervalued and if alpha is negative, then the<br />

asset is plotted under the SML or it is overvalued.<br />

The output of the first-pass regression, the estimated betas, will then be used as input into the second-pass<br />

regression. The second-pass regression is given by:<br />

R̄i − R̄f = γ0 + γ1bi , i = 1, …, n. (3)<br />

R̄i − R̄f are the sample averages (over all months) of the excess return on each of the n stocks, which are regressed<br />

on bi, the estimated beta coefficient of each asset, in order to determine the relationship between risk (beta) and<br />

return. Therefore, if the CAPM is valid, then the intercept, γ0 (alpha), should be zero and the coefficient γ1 should<br />

equal R̄M − R̄f, which is the sample average of the excess return of the market index.<br />

In the current article, the second-pass regression model will be further expanded, with the variances of the n<br />

residuals, σ²(ei), as an additional independent variable. The purpose of this expansion is to test if the expected excess<br />

return on assets is determined only by the systematic risk, as measured by beta, and is independent of the<br />

nonsystematic risk, as measured by the variance of the residuals. Notice that the variances of the residuals, σ²(ei), are<br />

estimated from the first-pass regression. The extended second-pass model is, thus, as follows:<br />

R̄i − R̄f = γ0 + γ1bi + γ2σ²(ei) (4)<br />
<br />
The extended regression equation (4) is estimated with the hypotheses γ0 = 0, γ1 = R̄M − R̄f, γ2 = 0. The<br />

hypothesis that the coefficient γ2 should be zero is consistent with CAPM, which assumes that the risk premium<br />

depends only on beta and there is no other variable in the right-hand-side of the equation that is significant for the<br />

regression model. In other words, γ2=0 agrees with the notion that nonsystematic risk should not be “priced”, that is<br />

that there is no risk premium earned for bearing nonsystematic risk.<br />
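The two-pass procedure described above can be sketched in a few lines; the simulated inputs below (market excess returns, true betas, noise levels) are hypothetical stand-ins, not the study's CSE data:

```python
# Sketch of the two-pass test described above, on simulated data. All inputs
# (market_excess, betas_true, noise levels) are hypothetical, not CSE data.
import numpy as np

rng = np.random.default_rng(0)
T, n = 72, 29                              # 72 monthly observations, 29 stocks
market_excess = rng.normal(0.005, 0.04, T)           # R_Mt - R_ft
betas_true = rng.uniform(0.2, 1.6, n)
excess_returns = np.outer(market_excess, betas_true) + rng.normal(0, 0.05, (T, n))

# First pass: time-series regression of each stock's excess return on the
# market excess return, yielding an estimated beta and a residual variance.
X1 = np.column_stack([np.ones(T), market_excess])
betas = np.empty(n)
resid_var = np.empty(n)
for i in range(n):
    coef, *_ = np.linalg.lstsq(X1, excess_returns[:, i], rcond=None)
    betas[i] = coef[1]
    resid = excess_returns[:, i] - X1 @ coef
    resid_var[i] = resid.var(ddof=2)

# Second pass (extended model, equation (4)): cross-sectional regression of
# the average excess returns on beta and on the residual variance.
avg_excess = excess_returns.mean(axis=0)
X2 = np.column_stack([np.ones(n), betas, resid_var])
gamma, *_ = np.linalg.lstsq(X2, avg_excess, rcond=None)
# Under the CAPM: gamma[0] ~ 0, gamma[1] ~ market_excess.mean(), gamma[2] ~ 0.
```

Here a significantly non-zero gamma[2] would, as in the article's test, indicate that nonsystematic risk is priced.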

2.3 The Cyprus Stock Exchange<br />

The Cyprus Stock Exchange (CSE) has been officially operating since 29 March 1996, and it currently has 122<br />

listed companies (Cyprus Stock Exchange, 2011). CSE is supervised by the Ministry of Finance. During its early<br />

years of operation, CSE experienced a big boom followed by a crash: the market rose strongly in 1999, when its<br />
main index increased by more than 700%, from 97 points on 1 January 1999 to 852 points on 1 December 1999, but<br />
fell sharply after that, by more than 80%, to 103 points on 30 September 2001 (see Tsangari, 2011). The crash was<br />
accompanied by a ‘scandal’ and a number of violations and breaches.<br />

The current study examines market efficiency for the years 2002-2007, the time period right after<br />
the big crash. This period marks a new era for the CSE. First, a fully automated online settlement and<br />
clearing system was launched in 2002, providing high liquidity and marketability and thus reducing the risk of<br />
the market being manipulated by a few players. In 2002 the Ministry of Finance, in its efforts to address the<br />
problems of the CSE, decided to redesign the institutional and legal framework with a view to addressing the<br />
challenge of integration into the unified financial market of the European Union (EU) (Tsangari, 2011). It is<br />
also the period that marks the beginning of the liberalization of interest rates in Cyprus, which started in 2002. In<br />
addition, the CSE, within its strategic plans for 2004–2006, reached an agreement with the Athens Stock<br />
Exchange for the development of a common platform between the two stock markets. When Cyprus<br />
joined the EU on 1 May 2004, the adjustments to the financial system and the operations of the Cyprus financial<br />
market began, making the CSE an integral part of the European financial system and one of the most<br />
interesting emerging markets today.<br />

2.3.1 Selection of CSE companies for the efficiency tests<br />

The companies listed in CSE are divided into different economic sectors, according to the FTSE Industry<br />

classification benchmark: Financials, Government Bonds, Corporate Bonds, Consumer Services, Industrials,<br />

Consumer Goods, Telecommunications, Technology and Basic Materials. Among these sectors, the financial sector<br />

represents 92.7% of the total volume of all sectors (Cyprus Stock Exchange, 2009). More specifically, in the<br />

financial sector, Bank of Cyprus Public Company Ltd and Marfin Popular Bank Public Company Ltd comprise the<br />

largest market capitalization with approximately 64% of the total market capitalization (Cyprus Stock Exchange,<br />

2009). By the end of the period of interest there were about 140 listed companies, whereas the market capitalization<br />

of shares (excluding investment companies) reached 16.45 billion euros (Cyprus Stock Exchange, 2007).<br />

In the current study 30 companies were selected for inclusion in the efficiency tests. The companies were<br />

selected so that they represent all sectors of the Cyprus economy and give a representative picture of the market,<br />

especially in terms of market capitalization. The selected companies have been listed in the Cyprus Stock Exchange<br />

since its official opening in March 1996. All the historical data have been collected from the official website of the<br />

Cyprus Stock Exchange. The stock return (Ri) will be calculated as (Pt – Pt-1)/Pt-1 where Pt is the closing price of the<br />

stock on day t. The monthly return will be calculated using the average of the daily returns in each month.<br />
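As a quick sketch of these return definitions (the closing prices below are invented, not CSE quotes):

```python
# Daily return R_t = (P_t - P_{t-1}) / P_{t-1} from closing prices, and a
# monthly return taken as the average of the daily returns in the month.
# The five closing prices are invented for illustration.
prices = [100.0, 101.0, 99.0, 102.0, 102.0]
daily_returns = [(prices[t] - prices[t - 1]) / prices[t - 1]
                 for t in range(1, len(prices))]
monthly_return = sum(daily_returns) / len(daily_returns)
```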

2.3.2 Selection of market index used as market proxy for CAPM tests<br />

The CSE began to cooperate with the London Stock Exchange and Financial Times to establish the FTSE/CySE 20<br />

index in November 2000. The purpose of FTSE/CySE 20 Index is “to provide a real time measure of the Cyprus<br />

Stock Market on which index-linked derivatives could be traded” (FTSE, 2009, p.3). The FTSE/CySE 20 Index is<br />

managed by the FTSE International Limited, the Cyprus Stock Exchange (CSE), and the FTSE/CySE Index<br />

Advisory Committee. The eligible securities to include in FTSE/CySE 20 Index are the 20 largest securities valued<br />

by their market capitalization after passing the investibility screens. According to the Ground Rules, the investibility<br />

screens consist of four criteria: first, 20% of the stock’s shares must be publicly available for investment (free float),<br />
in other words they should not be in the hands of a single party or parties; second, the securities must be accurately and<br />
reliably priced so that the market value of the company can be determined satisfactorily; third, the securities<br />
should be traded on at least 50% of the business days during the six calendar months; fourth, the securities must have<br />
traded for at least 20 trading days before the review collection date. Based on these eligibility criteria,<br />

the authors have decided to use the FTSE/CySE 20 Index as the market proxy to test the CAPM in CSE. According<br />

to the Ground Rules, the FTSE/CySE 20 index is calculated by using the formula:<br />

( Σ_{i=1}^{n} Xi Wi Ai ) / d ,<br />
<br />
where Xi is the latest trading price of the i-th component security, n is the total number of securities in the index, Wi is<br />
the weight of the i-th component security (or number of ordinary shares issued by the company), Ai is the free float<br />
percentage of capitalization available to all investors, and the divisor, d, is the total issued share capital of the index.<br />
The performance of the FTSE/CySE 20 Index for the years 2002 to 2007 is shown in Figure 1.<br />


Figure 1. The performance of the market index for the time period under investigation (2002-2007).<br />
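The index formula described above can be illustrated numerically; every component value and the divisor below are hypothetical, since the Ground Rules only specify the structure of the calculation:

```python
# Capitalization-weighted index value (Σ X_i · W_i · A_i) / d, where X_i is
# the price, W_i the shares in issue and A_i the free-float factor of
# component i, and d is the divisor. All numbers below are made up.
components = [
    (4.20, 1_000_000, 0.60),      # (X_i, W_i, A_i)
    (1.35, 5_000_000, 0.45),
    (0.80, 2_500_000, 0.90),
]
d = 9_500_000                     # hypothetical divisor
index_value = sum(x * w * a for x, w, a in components) / d
```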

In the CAPM tests, the market return (RM) will be calculated using the market proxy, the FTSE/CySE 20 index.<br />

More specifically, RM will be calculated as (Mt – Mt-1)/Mt-1 where Mt is the value of the FTSE/CySE 20 index in<br />

month t. All the historical data have been collected from the official website of the Cyprus Stock Exchange.<br />

2.3.3 Selection of risk-free instruments for CAPM tests<br />

The Central Bank of Cyprus issues short-term risk-free instruments, namely Treasury Bills (T-bills), 13-week and<br />

52-week. Both types of T-bills are issued to the public via bid-price auction. The 13-week T-bills were first<br />

auctioned publicly in 1996, while the 52-week T-bills were publicly auctioned in 1998. Only the 52-week T-bills are<br />

listed in the CSE so that the prices are determined at the stock exchange floor. The 13-week T-bills which are not<br />

listed in the CSE are issued either by auction or at fixed prices that meet the government investment needs of the<br />

Social Security Fund (Stephanou & Vittas, 2006).<br />

The authors have decided to use the 13-week Cyprus T-bills to represent the risk free rate, Rf, in the efficiency<br />

tests. The reasons that 13-week T-bills were chosen over 52-week T-bills were, first, that Eurostat reports the rate of<br />
13-week T-bills for Cyprus, and, second, that the shorter-term nature of 13-week T-bills makes them more representative of a<br />
risk-free asset; similar to the use of 90-day T-bills in tests for the US market, shorter-term T-bill yields are more<br />
consistent with the CAPM as originally derived and reflect truly risk-free returns in the sense that T-bill investors<br />
avoid material loss in value from interest rate movements (Bruner, Eades, & Schill, 2010). Monthly T-bill rates are<br />

collected for the years 2002 to 2007 for inclusion in our tests. It should be stressed that before 2002 the rate was<br />

constant at 9%, and, therefore, data are collected after the liberalization of interest rates. All the historical data have<br />

been collected from the official website of the Central Bank of Cyprus.<br />

3 Results<br />

3.1 First-Pass Regression<br />

The sample included 30 stocks from the Cyprus Stock Exchange. Results from the first-pass regression of equation<br />

(2) showed that 29 companies have a significant beta coefficient (p-value


overall results, and especially the fact that both the lowest and highest beta belong to companies in the Financial<br />

services, show that there is no specific pattern in terms of the level of systematic risk in relation to the industry<br />

associated with it. The first-pass regression also produced the residuals, the variance of which will be later used as<br />

input in the extended second-pass regression.<br />

3.2 Second-Pass Regression<br />

In the second-pass regression of equation (3) the sample averages of the excess return of the stocks (averages are<br />

over the 72 monthly observations, for the years 2002-2007) were regressed on the estimated beta coefficients of the<br />

29 stocks, taken from the first-pass regression. The results appear in Table 1.<br />

Independent Variable (coefficient)    B        Standard Error    t          p-value<br />
Intercept (γ0)                        -0.038   0.001             -35.651


eta, and high-beta stocks have (on average) yielded lower returns than they should have on the basis of their beta.<br />

Although the observed premium per unit of beta is lower than the Sharpe-Lintner model predicts, the relation<br />

between average return and beta is roughly linear, consistent with the Black version of the CAPM, which predicts<br />

only that the beta premium is positive. Third, equation (4) assumes that the nonsystematic risk, represented by the<br />

coefficient for variance of residuals (γ2), should equal zero, i.e. it should not explain the risk premium of the stock.<br />

However, in our findings γ2 is equal to 3.027, and it is significantly different from zero (p-value=0.020


Bruner, R. F., Eades, K. M., & Schill, M. J. (2010). Case studies in finance: managing for corporate value creation.<br />

6 th edition, New York: McGraw Hill.<br />

Cyprus Stock Exchange (2007). CSE Bulletin, March 2007, Issue 122.<br />

Cyprus Stock Exchange (2009). CSE Bulletin, May 2009, Issue 148.<br />

Cyprus Stock Exchange (2011). CSE Bulletin, March 2011, Issue 170.<br />

Daniel, K., & Titman, S. (1997). Evidence on the characteristics of cross sectional variation in stock returns. Journal<br />

of Finance, 52, 1-33.<br />

Fama, E. F., & French, K. R. (1993). Common risk factors in the returns on stocks and bonds. Journal of Financial<br />

Economics, 33, 3-56.<br />

Fama, E. F., & French, K. R. (2004). The Capital Asset Pricing Model: theory and evidence. Journal of Economic<br />

Perspectives, 18(3), 25-46.<br />

Fama, E. F., & MacBeth, J. D. (1973). Risk, return and equilibrium: empirical tests. Journal of Political Economy,<br />

81(3), 607-636.<br />

Fletcher, J. (1997). An examination of the cross-sectional relationship of beta and return: UK evidence. Journal of<br />

Economics and Business, 49, 211-221.<br />

FTSE (2009). Ground Rules for the Management of the FTSE/CySE 20 Index, Version 2.0, November 2009.<br />

Gibbons, M. R. (1982). Multivariate tests of financial models: a new approach. Journal of Financial Economics,<br />

10(1), 3-27.<br />

Gursoy, C. T., & Rejepova, G. (2007). Test of Capital Asset Pricing Model in Turkey. Dogus University Journal, 8,<br />

47-58.<br />

Hodoshima, J., Garza-Gomez, X., & Kunimura, M. (2000). Cross-sectional regression analysis of return and beta in<br />

Japan. Journal of Economics and Business, 52, 515-533.<br />

Lewellen, J. (2002). Momentum and autocorrelation in stock returns. Review of Financial Studies, 15, 533-563.<br />

Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in stock portfolios and capital<br />

budgets. Review of Economics and Statistics, 47(1), 13-37.<br />

Markowitz, H. M. (1952). Portfolio selection. Journal of Finance, 7(1), 77-91.<br />

Miller, M. & Scholes, M. (1972). Rate of return in relation to risk: a reexamination of some recent findings. In M. C.<br />

Jensen (Ed.), Studies in the Theory of Capital Markets (pp. 47-78). New York: Praeger.<br />

Mossin, J. (1966). Equilibrium in a capital asset market. Econometrica, 34(4), 768-783.<br />

Sakhrani, M.S., & Ismail, A.G. (2001). The conditional CAPM and cross-sectional evidence of return and beta for<br />

Islamic Unit Trust in Malaysia. Bangkel Economi, Universiti Kebangsaan Malaysia, Bangi.<br />

Sandoval, E.A., & Saens, R. N. (2004). The conditional relationship between portfolio beta and return: evidence<br />

from Latin America. Cuadernos de Economia, 41, 65-89.<br />

Sharpe, W. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of<br />

Finance, 19(3), 425-442.<br />

Stambaugh, R. F. (1982). On the exclusion of assets from tests of the two-parameter model: a sensitivity analysis.<br />

Journal of Financial Economics, 10(3), 237-268.<br />

Stephanou, C., & Vittas, D. (2006). Public debt management and debt market development in Cyprus: Evolution,<br />

current challenges and policy option. Economic Policy Papers, No. 12-06.<br />

Tang, G.Y.N., & Shum, W. C. (2004). The risk-return relations in the Singapore Stock Market, Pacific Basin<br />

Finance Journal, 12, 179-195.<br />

Tsangari, H. (2011). Emerging markets investment opportunities: Cypriot investors and Russian mutual<br />

funds. International Journal of Business and Emerging Markets, 3(1), 89-106.<br />

Zhang, L. (2005). The value premium. Journal of Finance, 60, 67-103.<br />



THE VALUATION OF EQUITIES WHEN SHAREHOLDERS ENJOY LIMITED LIABILITY<br />

Jo Wells, Bangor Business School, UK<br />

Email: j.wells@bangor.ac.uk<br />

Abstract. This paper introduces an alternative approach to the valuation of shares where shareholders enjoy limited liability. Limited<br />

liability for shareholders is a relatively recent phenomenon, and insights into appropriate valuation methods can be drawn by first<br />

considering the case where shares carry unlimited liability. This leads to the argument that rather than the commonly adopted<br />

approach of directly modelling the share price using geometric Brownian motion, it is more appropriate for the underlying<br />

shareholders' equity to be considered as the stochastic variable in the modelling exercise, with a birth and death model chosen as the<br />

underlying stochastic process. The limited liability condition is imposed at a later stage in the valuation framework. These arguments<br />

lead to the derivation of the market value of equity for a non-dividend-paying firm as a perpetual American-style call<br />

option on shareholders' equity with a strike price of zero. The pricing of such an option is considered and potential empirical<br />

applications discussed.<br />

Keywords: equity valuation, perpetual options<br />

JEL classification: G12, G13<br />

1 Introduction<br />

Traditional approaches to equity valuation involve the calculation of the present value of future earnings or cash<br />

flows using an appropriate discount rate which takes into account the risk associated with those earnings or cash<br />

flows (Gordon, 1959; Basu, 1977). This paper considers the fundamental nature of equity investment in the presence<br />

of limited liability. The introduction of limited liability changes the nature of a share from a linear investment to a<br />

non-linear option-type instrument. Within this framework, a share can be viewed as a perpetual American-style call<br />

option on shareholders' equity with a strike price of zero. A consideration of the pricing of such an option leads to a<br />

number of empirically testable hypotheses.<br />

The paper proceeds as follows: Section 2 explains how in the presence of limited liability, a share can be<br />

viewed as a perpetual call option on shareholders' equity with strike price of zero. Section 3 discusses the pricing of<br />

such an option for an all-equity firm which pays no dividends. Section 4 presents the qualitative results produced by<br />

this approach for the case of the non dividend-paying all-equity firm. Section 5 goes on to consider the impact of<br />

leverage and dividends on the results presented in Section 4. Finally, Section 6 concludes.<br />

2 Shares as Call Options<br />

The key observation underpinning this paper is that limited liability changes the nature of the equity in a firm<br />

from a linear investment to an option-type contract. This observation has previously been made, but to the author's<br />

knowledge, the full description of the option-like nature of share investment arising from limited liability is new to<br />

the literature. Merton (1974), for example, identifies a share as a call option on the assets of a firm with strike price<br />

equal to the current level of debt, but the Merton model assumes a fixed maturity and models the assets of the firm<br />

as a geometric Brownian motion. These two assumptions are questioned in the current paper.<br />

In general, we can think of a very simple stylized balance sheet comprising total assets A financed by debt D<br />

and shareholders' equity E. In the simple initial case, the firm pays no dividends and has no debt, so E = A. 1 Firstly,<br />

it is instructive to consider the nature of equity investment in the absence of limited liability. Such concepts are not<br />

as strange as they might at first seem. Limited liability is a relatively recent concept in many markets. In the UK, for<br />

example, limited liability was not freely available to firms until the Limited Liability Act of 1855 (see Hunt, 1936<br />

for a full discussion). Before that, only a small number of firms, typically those set up to undertake large utility<br />

projects offering a high degree of perceived public good, had been granted limited liability by Royal Charter or<br />

1 Throughout the paper, shareholders' equity refers to the continuous economic variable, while book value of equity refers to the accounting<br />
value published annually in the balance sheet.<br />



Special Act of Parliament. Prior to the 1855 Act, therefore, stock market investment in the UK was a highly risky<br />

undertaking which could easily result in personal ruin.<br />

In the absence of limited liability and corporate insolvency, a share gives its owner a simple linear participation<br />

in the shareholders' equity on the firm's balance sheet. The value of shareholders' equity is unconstrained and can<br />

become negative. In such circumstances, shareholders may be called upon to inject further capital to support the firm<br />

(essentially a negative dividend).<br />

Shareholders in a firm have a fundamental right that is not generally highlighted in the literature. At any point in<br />

time, the shareholders can collectively choose to continue with the firm's operations or to close the firm, in which<br />

case they walk away with the current value of the shareholders' equity. The shareholders in a non dividend-paying<br />

firm receive no cash flows from the firm but the value of their shares is expected to grow over time (and they can<br />

sell shares to generate income as required, in the spirit of Miller & Modigliani, 1961).<br />

Without limited liability, shareholders still have this right, but when shareholders' equity is negative they face<br />
the choice of paying into the company to make good the entire deficit in order to close it, or<br />
continuing with operations (and possibly having to pay in a certain amount to support them). The shareholders'<br />

choice of whether to close or continue with the investment does not, therefore, constitute an option contract.<br />

The introduction of limited liability changes the nature of equity investment completely. Now, losses are limited<br />

to the value of the share. Shareholders will not have to pay additional funds in to close the company should<br />

shareholders' equity become negative. Instead of holding a direct participation in shareholders' equity itself, investors<br />

now hold an asset that is akin to an option on shareholders' equity. In a traded stock market, the asset that is actually<br />

being traded is the option contract, and the underlying shareholders' equity is not now directly traded 2 .<br />

In order to model stock prices, therefore, we need to model shareholders' equity and hence the value of the<br />

option held by shareholders in the presence of limited liability. We can think of a share as a call option with strike of<br />

zero on the shareholders' equity of the firm (the strike price being zero since there is no cost associated with the<br />

decision to close the firm and walk away with the shareholders' equity). The option is perpetual (it has no fixed<br />

expiry date), and as the shareholders can exercise it at any time, it is American-style. Since the option price cannot<br />

become negative, this in turn means that share prices cannot now become negative, which is consistent with real-world<br />
experience. Corporate insolvency provisions, which in their simplest form mean that once shareholders’<br />

equity hits zero the firm ceases to trade and the shareholders receive no payoff, can be viewed as a knock-out feature<br />

at zero. The introduction of limited liability thus changes the nature of a share considerably from the perspective of<br />

the investor, from a linear contract to a non-linear option-type contract.<br />
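A minimal sketch of this payoff asymmetry, under the assumption that "payoff" means the value to shareholders of closing the firm at equity level E:

```python
# Value to shareholders of closing the firm at equity level E.
# Without limited liability the payoff is linear in E (losses unbounded);
# with limited liability it is max(E, 0), a zero-strike call payoff,
# with insolvency acting as a knock-out once E reaches zero.
def payoff_unlimited(E):
    return E                      # shareholders must make good any deficit

def payoff_limited(E):
    return max(E, 0.0)            # shareholders can walk away at zero cost

equity_levels = [-2.0, -0.5, 0.0, 0.5, 2.0]
unlimited = [payoff_unlimited(E) for E in equity_levels]
limited = [payoff_limited(E) for E in equity_levels]
```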

Section 3 considers the pricing of such an option for the most simple case of an all-equity firm which pays no<br />

dividends.<br />

3 Pricing the Perpetual Call Option on Shareholders' Equity<br />

The pricing of perpetual options has been considered in both the financial options and real options literatures.<br />

Samuelson (1965) considers the pricing of warrants, including perpetual warrants, where the underlying is a stock<br />

following a geometric Brownian Motion. More recent papers extend the approach to incorporate jump-diffusion<br />

(see, for example, Gerber & Shiu, 1998). In the real options literature, perpetual call options are relevant to the<br />

valuation of options to invest (see, for example, McDonald and Siegel, 1986, and Dixit and Pindyck, 1994).<br />

The modelling of the perpetual American-style call option with strike of zero described in Section 2 requires<br />

both a stochastic variable and an associated stochastic process to be specified. Shareholders’ equity is chosen to be<br />

the stochastic variable. In the financial options literature, it is common practice to model stock prices as a geometric<br />

Brownian motion. This approach assumes that logarithmic returns are normally distributed and prices therefore<br />

lognormally distributed, in line with empirical evidence. The share price is therefore directly constrained to be<br />

2 The coordination required among individual shareholders in the decision to exercise the option and close the firm is not explicitly considered in this<br />
paper. Rather, it is assumed that shareholders will choose collectively to exercise the option only at a point where it is optimal to do so.<br />



positive by the distributional assumptions implicit in the stochastic process. The same approach cannot be taken<br />

with shareholders' equity as the stochastic variable, since it would then be impossible for shareholders' equity to<br />

reach zero and hence impossible for limited liability to have an impact on share valuation in the way that is proposed<br />

in this paper.<br />

The approach taken in this paper is to model shareholders' equity as a birth and death process. Birth and death<br />

processes, also known as square root processes, have been used widely in the modelling of populations (see, for<br />

example, Moran, 1958) and in finance in stochastic volatility models of asset prices following Heston (1993). Cox<br />

and Ross (1976) propose the square root process as an alternative to geometric Brownian motion for the modelling<br />

of stock prices. The continuous limit of the birth and death process for total assets is given by<br />

dA = μA dt + σ√A dz (1)<br />
<br />
In the case of an all-equity firm (A = E), shareholders' equity E follows the same process as total assets A:<br />
<br />
dE = μE dt + σ√E dz (2)<br />

In this framework, shareholders' equity can reach zero with positive probability. The birth and death process<br />

implicitly carries an absorbing barrier at zero, which in the context of the current problem is useful in that it prevents<br />

shareholders’ equity from becoming positive again once it has reached zero (i.e. it implicitly takes corporate<br />

insolvency into account). Observed share prices are positive not because the underlying stochastic variable is<br />

constrained in the sense of a geometric Brownian motion but because they reflect call option prices, which by<br />

definition can only be positive.<br />
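A simple Euler-discretized simulation (with illustrative parameters, not calibrated to any firm) shows paths of equation (2) reaching the absorbing barrier at zero with positive probability:

```python
# Euler-discretized simulation of dE = μE dt + σ√E dz with an absorbing
# barrier at zero. Parameter values are illustrative only, chosen so that
# some paths are absorbed within the horizon.
import math
import random

def simulate_equity(E0, mu, sigma, dt, steps, rng):
    """One path of shareholders' equity; zero is absorbing (insolvency)."""
    E = E0
    path = [E]
    for _ in range(steps):
        if E <= 0.0:
            path.append(0.0)          # firm already insolvent; stays at zero
            continue
        dz = rng.gauss(0.0, math.sqrt(dt))
        E = E + mu * E * dt + sigma * math.sqrt(E) * dz
        E = max(E, 0.0)               # truncate at the absorbing barrier
        path.append(E)
    return path

rng = random.Random(42)
paths = [simulate_equity(0.25, 0.02, 0.8, 1 / 252, 252, rng) for _ in range(200)]
absorbed = sum(1 for p in paths if p[-1] == 0.0)
```

By contrast, a geometric Brownian motion started above zero never reaches zero, which is why that process is rejected here.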

This observation underpins a further important difference between this paper and some of the prior literature on<br />

perpetual option pricing. By the very nature of the problem, shareholders' equity (the underlying stochastic variable)<br />

is not directly tradeable. As discussed in Section 2, when buying and selling shares, investors are in fact buying and<br />

selling options on shareholders' equity. Since it is not possible to hedge perfectly the risk associated with the option<br />

in the way that is assumed by standard Black-Scholes type pricing derivations (which would assume the ability to<br />

directly trade in shareholders' equity), the differential equation governing the option price must take into account<br />

this residual risk. The option is valued in a risk-adjusted setting with the drift term adjusted to compensate the option<br />

writer for the unhedgeable risk. Letting C be the value of the perpetual American-style call option on shareholders'<br />

equity E with strike price of zero where E follows equation (2), applying Itô's Lemma 3 gives<br />

dC = ( μE dC/dE + (1/2) σ²E d²C/dE² ) dt + σ√E (dC/dE) dz (3)<br />

After adjusting the drift for the unhedgeable residual risk, the differential equation to be satisfied by the option<br />
value function is<br />
<br />
(1/2) σ²E d²C/dE² + ( μE − λσ√E ) dC/dE − rC = 0 (4)<br />
<br />
where λ is the market price of risk, which is assumed to be constant. For the drift μ to exceed the risk-free return<br />
r by an amount which compensates exactly for the risk σ√E would require μ = r + λσ/√E. The birth and death<br />
process implies reduced risk in percentage terms at higher levels of shareholders' equity, which translates into lower<br />
required values of μ to compensate for that risk, as shown in Figure 1.<br />
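As a consistency check (a sketch added here, not part of the original derivation): substituting μ = r + λσ/√E into equation (4) collapses the drift coefficient to rE, and the resulting equation is satisfied by the intrinsic value C = E.

```latex
% Substitute \mu = r + \lambda\sigma/\sqrt{E} into equation (4):
\tfrac{1}{2}\sigma^{2}E\,\frac{d^{2}C}{dE^{2}}
  + \Bigl[\Bigl(r + \frac{\lambda\sigma}{\sqrt{E}}\Bigr)E
          - \lambda\sigma\sqrt{E}\Bigr]\frac{dC}{dE} - rC
  = \tfrac{1}{2}\sigma^{2}E\,\frac{d^{2}C}{dE^{2}} + rE\,\frac{dC}{dE} - rC = 0.
% For C(E) = E: the second derivative vanishes and rE \cdot 1 - rE = 0,
% so the equation holds and the option value equals its intrinsic value.
```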

3 noting that ∂C(E)/∂t = 0 since the option is perpetual<br />



Figure 1. Required μ such that μ = r + λσ/√E (r = 0.03, σ = 0.10, λ = 2)<br />
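The hurdle in Figure 1 can be reproduced directly from its caption parameters (r = 0.03, σ = 0.10, λ = 2); the grid of equity levels below is arbitrary:

```python
# Required drift μ(E) = r + λσ/√E with the Figure 1 parameters. The hurdle
# declines towards r as shareholders' equity E grows, because the square-root
# process implies lower percentage risk at higher E.
import math

r, sigma, lam = 0.03, 0.10, 2.0

def required_mu(E):
    return r + lam * sigma / math.sqrt(E)

mus = [required_mu(E) for E in (0.5, 1.0, 2.0, 5.0, 10.0)]
```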

Where this holds, the solution of (4) follows the standard result that the perpetual call option will never be<br />

exercised and its value is simply its intrinsic value E at any point in time. If, on the other hand, the possibility is<br />

allowed that the rate of growth of a firm's assets need not satisfy this criterion in an incomplete market where the<br />

underlying stochastic variable (shareholders' equity) is not traded, then very different results are obtained. Firstly,<br />

the coefficient of the first derivative term in equation (4) is modified to (r+α)E where α represents the excess of μ<br />

over the required rate shown in Figure 1. This gives<br />

1<br />

2<br />

d C<br />

dE<br />

dC<br />

dE<br />

2<br />

2<br />

� E � �r � � �E � rC � 0<br />

(5)<br />

2<br />

The solutions of the equation will therefore be price curves corresponding to particular values of α. The value of<br />

α relevant to any particular firm at a particular point in time is an empirical issue to be determined separately. It is<br />

not the case that firm values can be expected to move along a single price curve. From Figure 1, it is apparent that<br />

the hurdle required (in terms of asset growth) to attain any particular value of α is much higher for a small firm than<br />

a large firm, and we might therefore expect i) small firms to have lower α on average than larger firms and ii) α to<br />

increase, other things equal, as a firm grows.<br />

Equation (5) has no known closed-form solution. Instead, a Frobenius series solution is proposed following the<br />

approach of Pinto et al (2009). The proposed solution takes the form<br />

C(ξ) = Σ_{n=0}^{∞} a_n ξ^(n+p) (6)<br />

where the values taken by p and the series an are determined by the particular valuation problem in the form of<br />

the differential equation (5) and appropriate boundary conditions. Shareholders' equity is replaced by the scaled<br />

variable ξ such that σ²ξ = E in order to ensure that the terms of the resulting Frobenius series converge to zero and hence the series can be<br />

easily evaluated numerically. The Pinto et al method uses the Frobenius series to solve the equation analytically at a<br />

point arbitrarily close to zero, with the solution then expanded numerically to higher values of E. From (6),<br />

differentiating as required and then adjusting the counters so as to consider a power of ξ equal to n+p throughout,<br />

dC/dξ = Σ_{n=−1}^{∞} (n + p + 1) a_{n+1} ξ^(n+p) (7a)<br />
<br />
so ξ dC/dξ = Σ_{n=0}^{∞} (n + p) a_n ξ^(n+p) (7b)<br />


d²C/dξ² = Σ_{n=−2}^{∞} (n + p + 1)(n + p + 2) a_{n+2} ξ^(n+p) (7c)<br />
<br />
so ξ d²C/dξ² = Σ_{n=−1}^{∞} (n + p)(n + p + 1) a_{n+1} ξ^(n+p) (7d)<br />
<br />
Substituting equations (6) and (7) into (5),<br />
<br />
(σ²/2) Σ_{n=−1}^{∞} (n + p)(n + p + 1) a_{n+1} ξ^(n+p) + (r + α) Σ_{n=0}^{∞} (n + p) a_n ξ^(n+p) − r Σ_{n=0}^{∞} a_n ξ^(n+p) = 0 (8)<br />

In order for equation (8) to hold, the total coefficient on each of the powers of ξ must be zero. Considering first<br />

the case n = −1, the coefficient of ξ^(p−1) is (from the first term only)<br />

(σ²/2)(p − 1) p a_0 = 0 (9)<br />

which has solutions p=0 and p=1. In general, the required solution will be a linear combination of the two<br />

solutions<br />

C_1 = \sum_{n=0}^{\infty} a_n \xi^{n+1},  and    (10a)<br />

C_2 = \ln(\xi)\, C_1 + \sum_{n=0}^{\infty} b_n \xi^{n}    (10b)<br />

where the first solution C1 corresponds to the larger of the two roots of (9) and the second solution C2 is the<br />

Frobenius series corresponding to the root p=0 plus the natural logarithm of the first solution. A boundary condition<br />

C(0)=0 is now imposed (whereby the option value is required to reach zero when shareholders’ equity reaches the<br />

absorbing barrier of zero). This is satisfied by C1 but not C2, since C2(0)=b0. The required solution must therefore be<br />

a multiple of C1 alone. Substituting the solution p=1 into (8) and considering this time the coefficients of ξ^{n+p} for the<br />

case n≥0 gives<br />

\frac{\sigma^2}{2}(n+1)(n+2)\, a_{n+1} + (r+\alpha)(n+1)\, a_n - r\, a_n = 0    (11)<br />

Changing the counter and simplifying,<br />

a_n = -\frac{2\left[(r+\alpha)\,n - r\right]}{\sigma^2\, n(n+1)}\, a_{n-1}    (12)<br />

Once the initial term a0 has been found, the remaining terms in the series solution can be determined<br />

analytically by equation (12) and the option value hence obtained from equation (6). To find the value of a0<br />

corresponding to the particular solution of the valuation problem, further boundary conditions must be imposed on<br />

the solution to the differential equation. The standard "high contact" or "smooth pasting" conditions commonly<br />

applied to the valuation of American-style options (see, for example, Samuelson, 1965 and Merton, 1973), whereby<br />

the level and first derivative of the option value are matched to those of the option payoff at the optimal exercise<br />

point, cannot be applied here since the optimal exercise point is either infinity (the option will never be exercised) or<br />

zero (the option will be immediately exercised) depending on whether α is positive or negative.<br />



Where α is positive, the pricing approach produces values of the option which are in excess of the current level<br />

of shareholders’ equity. Ingersoll (1987, p373) refers to this as the “problem of infinities”, whereby the unbounded<br />

potential payoff from future exercise combines with the low probability of exercise to produce a finite present value.<br />

Conversely, where the drift term is insufficient to compensate for risk (i.e. α is negative), the pricing method<br />

produces values which are below the current level of shareholders’ equity, in which case the optimal action by<br />

shareholders is to exercise their option to “take the money and run”.<br />

The optimal exercise point where α is positive is at infinity. The option payoff (for the unscaled variable E) has<br />

a constant slope of 1 and the solution curves obtained tend to straight lines at high levels of E. The smooth pasting<br />

condition that the option value and the first derivative of the option value match those of the payoff at infinity can<br />

therefore be approximated by setting the first derivative of the solution equal to one at the highest level of E<br />

considered. The solution curve is first calculated using a value a0 = 1, and the required value of a0 in equation (6) is then<br />

calculated so as to equate the derivatives of the option value and the payoff at the highest level of E considered.<br />

Equation (6) is then used to calculate option values at a range of levels of shareholders' equity. Section 4 presents<br />

the qualitative results and Section 5 briefly discusses how these results might be extended to incorporate leverage<br />

and dividends.<br />
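As a concrete illustration, the recurrence and the a0 calibration step described above can be sketched as follows. This is a minimal sketch, not the Mathematica implementation of Pinto et al (2009): it assumes the recurrence (12) and the p = 1 series as reconstructed here, and the function names are illustrative only.<br />

```python
import numpy as np

def frobenius_coeffs(r, alpha, sigma, n_terms, a0=1.0):
    """Coefficients a_n of C(xi) = sum_n a_n xi^(n+1) (the p = 1 root),
    generated term by term from the recurrence (12):
        a_n = -2[(r + alpha) n - r] / (sigma^2 n (n + 1)) * a_{n-1}."""
    a = np.empty(n_terms)
    a[0] = a0
    for n in range(1, n_terms):
        a[n] = -2.0 * ((r + alpha) * n - r) / (sigma ** 2 * n * (n + 1)) * a[n - 1]
    return a

def series_value(xi, a):
    """Option value C(xi) = sum_n a_n xi^(n+1), equation (6) with p = 1."""
    n = np.arange(len(a))
    return float(np.sum(a * xi ** (n + 1)))

def series_slope(xi, a):
    """dC/dxi = sum_n (n + 1) a_n xi^n."""
    n = np.arange(len(a))
    return float(np.sum((n + 1) * a * xi ** n))

# Smooth-pasting approximation: rescale the series so its slope equals the
# payoff slope (one) at the highest xi considered.  Direct summation is
# numerically reliable only near zero; in the paper the solution is then
# extended to higher E numerically.
r, alpha, sigma = 0.03, 0.02, 0.1
a = frobenius_coeffs(r, alpha, sigma, n_terms=200)
a = a / series_slope(1.0, a)
```

A useful sanity check on the sketch: with α = 0 every coefficient beyond a0 vanishes, so C(ξ) = a0·ξ and the option prices at intrinsic value, exactly as reported in Section 4.<br />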

4 Results<br />

Figure 2 shows the values obtained from the pricing approach described in the previous section with parameters<br />

r = 0.03, σ = 0.1 and λ = 2, and α ranging from 0% to 3%. The x-axis value E is the value of shareholders' equity (≈<br />

book value of equity) and the y-axis value is the value of the perpetual call option (≈ market value of equity).<br />

[Plot: option value C (vertical axis, 0 to 25) against shareholders' equity E (horizontal axis, 0 to 10), one curve for each of α = 0%, 1%, 2%, 3%.]<br />

Figure 2. Market Value of the All-Equity Firm (r = 0.03, σ = 0.1, λ = 2)<br />

From Figure 2, it is seen that the market value of the all-equity firm is increasing with firm size. Where α = 0, the<br />

drift is exactly sufficient to compensate for risk at all levels of E, the option will never be exercised, and it is priced<br />

at its intrinsic value consistent with the findings of prior research pricing perpetual options using a complete markets<br />

approach. Higher values of α, i.e. a higher growth rate of assets and hence shareholders' equity, produce higher<br />

option values. Negative values of α (not shown in Figure 2) produce option values below the option's intrinsic value.<br />

Here, investors should immediately exercise the option ("taking the money and running"). Figure 2 suggests that the<br />

market values of small firms are more sensitive to changes in the value of α than those of large firms in percentage<br />

terms.<br />

It is important to note that the playing field is not level for all firms. As shown in Figure 1, small firms need to<br />

generate a much higher drift term μ than large firms in order to enjoy the same α as a result of the choice of the<br />

square root (birth and death) process. Changes in the parameters σ and λ impact upon the required drift rate μ a firm<br />

must generate in order to achieve a given level of α and hence plot on a particular curve in Figure 2.<br />



Considering first the volatility parameter σ, Figure 3 shows how the drift μ corresponding to α=0 (i.e. the<br />

minimum drift for the option to have value in excess of its intrinsic value) changes with σ. The solid black line<br />

corresponds to the same parameters as the solid black line in Figure 1 (r = 0.03, σ = 0.1 and λ = 2). The impact of<br />

changes in σ on required drift and hence α is more pronounced for small firms (represented by low E for the all-equity<br />

firm) than large firms (high E). At high levels of σ, it becomes increasingly difficult for small firms to<br />

achieve the required drift to compensate for risk, and hence increasingly difficult for the option represented by<br />

traded equity to price at above intrinsic value.<br />

[Plot: required drift μ (vertical axis, 0% to 35%) against E (horizontal axis, 0 to 10), one curve for each of σ = 0.05, 0.10, 0.15.]<br />

Figure 3. Required μ such that μ = r + λσ/E^{0.5} (r = 0.03, λ = 2)<br />

Figure 4 goes on to consider the impact of the market price of risk λ on required values of μ and hence option<br />

values. The market price of risk is positively related to required drift, hence increases in λ will, ceteris paribus,<br />

reduce the level of α achieved by individual firms and hence reduce option values. The impact of changes in the<br />

market price of risk on required μ and hence α is more equal across firm size than that of changes in σ.<br />

[Plot: required drift μ (vertical axis, 0% to 35%) against E (horizontal axis, 0 to 10), one curve for each of λ = 1.5, 2.0, 2.5.]<br />

Figure 4. Required μ such that μ = r + λσ/E^{0.5} (r = 0.03, σ = 0.10)<br />

Figures 3 and 4 show how the levels of asset growth required to achieve a given excess growth rate α vary with σ<br />

and λ. At high levels of volatility and market price of risk, firms (and in particular small firms) can be expected,<br />

ceteris paribus, to achieve low levels of α and hence market values of equity close to book value (low market to<br />

book ratios). When risk and the market price of risk are low, any given rate of asset growth μ corresponds to a<br />

higher excess return α and hence higher option value (higher market to book ratios). Figure 5 shows the market to<br />

book values corresponding to the option values in Figure 2.<br />
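The hurdle-rate relation plotted in Figures 3 and 4 is straightforward to reproduce; a minimal sketch (the function name is illustrative):<br />

```python
def required_mu(E, r=0.03, sigma=0.10, lam=2.0):
    """Minimum asset drift for alpha = 0 under the square root process:
    mu = r + lam * sigma / E**0.5 (the caption formula of Figures 3 and 4)."""
    return r + lam * sigma / E ** 0.5

# Small firms (low E) face a much higher hurdle rate than large firms (high E):
hurdles = {E: required_mu(E) for E in (1, 4, 9)}
```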

λ=2.5


[Plot: market to book value (vertical axis, 0 to 20) against E (horizontal axis, 0 to 10), one curve for each of α = 0%, 1%, 2%, 3%.]<br />

Figure 5. Market to Book Values (r = 0.03, σ = 0.10, λ=2)<br />

Figure 5 suggests that small firms will trade at higher market to book values than large firms to the extent that they<br />

are able to generate the same rate of excess asset growth above that required to compensate for risk. Considering the<br />

spread of values in Figure 5, one would expect to see a much higher degree of variation in market to book values<br />

across small firms than large firms. These results are also relevant to the consideration of well-known anomalies in<br />

stock market returns including size effects (Banz, 1981), value effects (Fama and French, 1992), the equity premium<br />

puzzle (Mehra and Prescott, 1985) and the equity volatility puzzle (Shiller, 1981) and provide the basis on which specific<br />

testable hypotheses relating to these observed anomalies can be derived.<br />

5 Leverage and Dividends<br />

Sections 3 and 4 considered the value of an all-equity firm which pays no dividends when the asset held by<br />

shareholders is valued as a call option on shareholders' equity. In section 3, a mixed analytic-numerical method was<br />

used to value the perpetual American-style option with strike zero where the underlying stochastic variable follows a<br />

birth and death process. This section considers how the approach might be extended to take into account leverage<br />

and dividends.<br />

5.1 Leverage<br />

In the stylized leveraged firm, total assets are financed by a combination of debt and shareholders' equity. Assuming<br />

that the notional outstanding amount of debt is fixed, shareholders' equity follows the stochastic process<br />

dE = \mu\,(1 + D/E)\,E\, dt + \sigma \sqrt{(1 + D/E)\,E}\; dz    (13)<br />

Figure 6 shows the relationship between required μ and E for three different levels of D/E. As previously, the<br />

solid black line corresponds to an all-equity firm (D/E=0) with r = 0.03, σ = 0.1 and λ = 2. Increases in leverage<br />

(D/E) reduce the hurdle rate of required μ for all firms, with the effect similar across firm sizes and with diminishing<br />

marginal benefits from increased leverage. Ceteris paribus, increasing leverage increases α and hence option<br />

(market) value. The marginal impact diminishes as leverage increases, and given that the values of small firms<br />

appear to be more sensitive to changes in α than those of large firms (Figure 2), the same percentage changes in<br />

leverage can be expected to have greater percentage impact on the values of small firms than large firms.<br />
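To illustrate the dynamics, equation (13) can be simulated directly. The sketch below uses an Euler-Maruyama discretization and assumes the form of (13) as reconstructed here, with the notional debt D held fixed and an absorbing barrier at E = 0; it is illustrative only, not part of the valuation method.<br />

```python
import numpy as np

def simulate_levered_equity(E0, D, mu, sigma, T=10.0, steps=10_000, seed=1):
    """Euler-Maruyama path of shareholders' equity E under
        dE = mu (1 + D/E) E dt + sigma sqrt((1 + D/E) E) dz,
    i.e. drift and diffusion driven by total assets A = D + E,
    with the notional debt D fixed and E = 0 absorbing."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    E = E0
    for _ in range(steps):
        A = D + E                                    # (1 + D/E) E = D + E
        E += mu * A * dt + sigma * np.sqrt(A * dt) * rng.standard_normal()
        if E <= 0.0:
            return 0.0                               # absorbed at zero equity
    return E
```

With σ = 0 the path reduces to the deterministic solution E(T) = (E0 + D)e^{μT} − D, a convenient sanity check on the discretization.<br />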

α=3%


[Plot: required drift μ (vertical axis, 0% to 35%) against E (horizontal axis, 0 to 10), one curve for each of D/E = 0, 0.5, 1.]<br />

Figure 6. Required μ such that μ = r + λσ/E^{0.5} (r = 0.03, σ = 0.10, λ = 2)<br />


Incorporating limited liability to shares therefore introduces implications for the impact of leverage on firm value<br />

beyond the standard "debt shield" view of capital structure.<br />

5.2 Dividends<br />

Dividends essentially represent a partial distribution of shareholders' equity to shareholders. Shareholders (option<br />

holders) receive these dividends whilst the option remains unexercised. The usual approach to incorporating<br />

dividends into option pricing models is to adjust the drift of the underlying, but this approach is not appropriate here<br />

as it implies that the underlying asset (shareholders' equity) is separately traded and pays dividends to its holders,<br />

rather than to the holders of the option contract. In the current context, dividends can be viewed as a partial exercise<br />

of the option, with a proportion of shareholders' equity distributed to shareholders (option holders) and the market<br />

value of equity (the residual unexercised option value) reduced by an amount which depends on the size of the firm,<br />

the amount of the dividend (the amount exercised) and the excess asset growth rate α (Figure 2).<br />

Incorporating limited liability to shares therefore introduces implications for the adjustment of share values in<br />

response to dividend announcements which differ from standard taxation-based arguments.<br />

6 Summary<br />

This paper explicitly considers the impact of limited liability on the valuation of shares. A share (market value) can<br />

be viewed as a perpetual American-style call option with strike zero on shareholders' equity (book value). The<br />

qualitative results presented in Section 4 and the extensions discussed in Section 5 present a number of areas in<br />

which the normative approach taken in this paper provides a foundation for the derivation of testable empirical<br />

hypotheses.<br />

7 Acknowledgements<br />

I would like to thank Dr Helena Pinto for providing the Mathematica code used in Pinto et al (2009) and which<br />

forms the basis of the approach used in solving the particular valuation problem considered in this paper. Any errors<br />

remain my responsibility.<br />




8 References<br />

Banz, R. (1981). The Relationship between Return and Market Value of Common Stock. Journal of Financial<br />

Economics, Vol. 9, 3-18.<br />

Basu, S. (1977). Investment Performance of Common Stock in Relation to their Price Earnings Ratios: A test of the<br />

Efficient Market Hypothesis. Journal of Finance, Vol 32(2), 663-678.<br />

Cox, J.C. & Ross, S.A. (1976). The valuation of options for alternative stochastic processes. Journal of Financial<br />

Economics, Vol. 3, 145-166.<br />

Dixit, A.K., & Pindyck, R.S. (1994). Investment Under Uncertainty. Princeton University Press.<br />

Fama, E.F. and K.R. French (1992). The Cross-Section of Expected Stock Returns. Journal of Finance, Vol. 47(2),<br />

427-465.<br />

Gerber, H.U., & E.S. Shiu (1998) Pricing Perpetual Options for Jump Processes, North American Actuarial Journal,<br />

Vol. 2 (3),101-112.<br />

Gordon, M.J. (1959). Dividends, Earnings and Stock Prices. Review of Economics and Statistics, Vol. 41 (2), 99–<br />

105.<br />

Heston, S.L. (1993). A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and<br />

Currency Options. The Review of Financial Studies, Vol. 6 (2), 327–343.<br />

Hunt, B.C. (1936). The Development of the Business Corporation in England, 1800–1867. Harvard University<br />

Press.<br />

Ingersoll, J.E. (1987). Theory of Financial Decision-Making. Rowman & Littlefield Publishers, Inc.<br />

McDonald, R., Siegel, D. (1986). The value of waiting to invest. Quarterly Journal of Economics, Vol. 101, 707-<br />

728.<br />

Mehra, R. and E.C. Prescott (1985). The Equity Premium: A Puzzle, Journal of Monetary Economics, Vol. 15, 145-<br />

161.<br />

Merton, R.C. (1973). Theory of rational option pricing, Bell Journal of Economics and Management Science, Vol. 4,<br />

141-183.<br />

Merton, R.C. (1974). On the pricing of corporate debt: the risk structure of interest rates. Journal of Finance, Vol.<br />

29, 449-470.<br />

Miller, M. & F. Modigliani (1961). Dividend policy, growth, and the valuation of shares. Journal of Business, Vol.<br />

34, 411-433.<br />

Moran, P.A.P. (1958). Random processes in genetics, Proceedings of the Cambridge Philosophical Society, Vol.54,<br />

60–71.<br />

Samuelson, P.A. (1965). Rational Theory of Warrant Pricing. Industrial Management Review, 6(2), 13-32.<br />

Shiller, R.J. (1981). Do stock prices move too much to be justified by subsequent changes in dividends? American<br />

Economic Review, Vol. 71, 421-436.<br />



PRICE PRESSURE RISK FACTOR IN CONVERTIBLE BONDS<br />

Nikolay Ryabkov, Swiss Finance Institute, University of Zurich, Switzerland<br />

Galyna Petrenko, Universidad Carlos III de Madrid, Spain<br />

Email: nkryabkov@gmail.com<br />

Abstract This paper sheds light on a previously unnoticed effect of hedge funds on the asset pricing of their investment<br />

instruments. Demand fluctuations from convertible arbitrage hedge funds generate a price pressure risk on convertible bond<br />

pricing. This risk materializes as a forced-selling risk factor during periods of market turbulence and illiquidity. We construct the<br />

price pressure risk factor from the sensitivity of individual convertible bonds to hedge fund demand through a long/short<br />

factor-mimicking portfolio, and show its impact on convertible pricing and on risk models for a convertible bond portfolio. In<br />

constructing the factor we rely on information about the total assets under management of convertible arbitrage hedge funds. We<br />

test the significance of the new factor in a multi-factor risk modeling framework for a convertible bond portfolio and as a missing risk<br />

factor for pricing models of convertible arbitrage hedge funds.<br />

Keywords: convertible bonds, hedge funds, asset pricing<br />

JEL classification: G20<br />

1 Introduction<br />

A convertible bond is a fixed-income security that gives its holder an option to convert it into a predetermined<br />

number of shares of the issuing firm. Traditionally, convertibles were used for outright investment through a<br />

bottom-up approach; however, since the 1990s, along with the growth of hedge funds, the focus of investment in<br />

convertibles has been shifting towards arbitrage. Plain vanilla convertible arbitrage consists of a long position in a<br />

convertible bond and a short position in the underlying stock. This strategy requires high leverage to obtain sound performance<br />

results in spite of the seemingly rich arbitrage opportunities arising from the potential mispricing of convertible bonds. Therefore<br />

convertible arbitrage hedge funds can be quite vulnerable during a liquidity crisis.<br />

Over the past twelve years the market for convertible bonds has experienced several significant troughs. First,<br />

convertible bonds reacted negatively to the 1998 crisis after the LTCM collapse. A second example is 2005,<br />

when convertible arbitrage hedge funds faced large redemptions from investors after poor past performance. Finally,<br />

the 2008 credit crunch brought another drop in the prices of convertibles in October 2008. In spite of the different<br />

causes of those tail events, there is one common factor, namely significant price pressure on convertible bond<br />

valuations from the demand side of hedge funds. Convertible bonds remain a niche product, and the recent growth<br />

in trading volumes and new issues was primarily driven by the demand of hedge funds. When a hedge fund<br />

requires liquidity for any exogenous reason, it is forced to sell its convertible bond positions, driving convertible<br />

prices down and distorting the fundamental valuation of convertible bonds. The fluctuations in hedge fund<br />

demand create a price pressure on the convertible bonds which should constitute a "forced selling" risk factor in<br />

convertible pricing models. Moreover, the price pressure hypothesis would contribute to the explanation of systematic<br />

convertible bond underpricing, in that the bonds carry a premium for unexpected demand fluctuations.<br />

Hedge funds are often called liquidity providers, and thus the price pressure risk factor is essentially a<br />

liquidity risk factor. Therefore it should have the same features as other liquidity factors. Assets with high liquidity<br />

sensitivity should trade at a price premium relative to securities with low sensitivity. The liquidity breadth<br />

increases during a market downturn. A high correlation in the selling activity of hedge funds can lead to large trade<br />

imbalances and can lower overall liquidity. The excess sell imbalance grows in illiquid securities such as convertible<br />

bonds. A financial crisis has a more pronounced effect on the liquidity of small, volatile and high ex-ante<br />

liquidity beta stocks. Hedge funds choose to sell less liquidity-sensitive (low liquidity beta) securities because<br />

selling illiquid securities during a crisis is very expensive. Therefore convertible arbitrage hedge funds typically<br />

carry a liquidity risk premium through the mismatch between the liquidity of long and short positions, which cannot be<br />

hedged.<br />



On the other hand, an increase in risk aversion induces a flight to quality that requires increasing the overall liquidity of<br />

portfolio holdings, and thus portfolios flee to securities with higher liquidity during a recession. Moreover, higher risk<br />

aversion can increase selling interest and decrease buying interest in convertibles during a market crash and/or<br />

recession. Hedge funds sell securities whose liquidity is less sensitive to market decline and buy securities<br />

with greater liquidity (the flight-to-quality effect). On top of the price pressure liquidity factor generated by hedge<br />

funds, converts, like corporate bonds, are usually more illiquid than equities. This fact magnifies the impact of<br />

liquidity concerns on convertible bond pricing. To sum up, the price pressure risk factor, as well as the ex-ante liquidity<br />

risk factor of convertible securities, appears as a risk factor for any investor fund holding a long position in convertibles,<br />

mutual funds and arbitrage hedge funds alike. A convertibles holding carries a risk premium for the price<br />

pressure factor, which explodes during market crashes and may or may not be mitigated by the flight-to-quality<br />

effect during longer periods of recession.<br />

The objective of the paper is to test the hypothesis of price pressure of hedge funds on the convertible bond<br />

market and to identify price pressure as a distinct risk factor in a convertible bond pricing model. It is important to<br />

control for other risk factors when assessing the impact of hedge funds on convertible bonds. Therefore we first<br />

develop a multi-factor risk model for convertible bonds embedding equity risk, bond-type risk factors and<br />

volatility risk factors to capture the option premium. The purpose of the multi-factor model is to filter out other<br />

potential risk factors that influence convertible bond pricing. We keep the model as parsimonious as possible<br />

because it is only a benchmark model for our study. We call it the conventional risk factor model and estimate it for the<br />

individual convertible bonds, constituents of the UBS Global Convertible Bond Index, on monthly data from October<br />

2002 to September 2009. Second, we measure hedge fund demand for convertible bonds based<br />

on the estimated assets under management of convertible arbitrage hedge funds taken from the TASS Lipper<br />

hedge fund database. We understand that hedge funds are not the only investors in convertible bonds. There are<br />

also mutual fund and institutional flows; however, their demand is likely to be more stable than that of hedge funds, as<br />

there is no evidence of a growing asset base among other convertible bond investors. We focus on convertible<br />

arbitrage hedge funds because their primary investment strategy involves a long position in convertible bonds. We do<br />

not exclude the possibility that other hedge funds may invest in converts, but there is no way to verify this because<br />

holdings information for hedge funds is unavailable. Therefore we assume that most of the demand was created by<br />

convertible arbitrage hedge funds. Any change in demand should be caused by a change in assets under<br />

management, and thus asset size reflects the demand dynamics. We normalize the total assets of convertible<br />

arbitrage hedge funds by total convertible bond market capitalization to construct the excess demand measure.<br />

The methodology for constructing and testing the new risk factor in convertible bond pricing is as follows.<br />

First, we establish a Granger causality relationship between hedge fund demand and convertible bond index returns.<br />

There are two approaches: the first tests directly for Granger causality between convertible bond index returns and<br />

hedge fund demand; the second tests for Granger causality between hedge fund demand and the residuals from the<br />

estimated conventional convertible bond multi-factor model. The second approach adds value as it filters out other<br />

potential systematic risk factors influencing convertible bond pricing. Moreover, as we expect differences between<br />

financial crisis and normal times, as well as between recession and expansion periods, we can control for these by adding time<br />

dummy variables directly in the factor risk model regressions and thus amending the residuals. We provide three<br />

types of output: for the model without dummy variables; with a dummy on the NBER recession period from January 2007<br />

through June 2009; and with a dummy on the "post-Lehman" period from August 2008 through March 2009. Moreover,<br />

the dummy variables mitigate the effect of structural breaks in the convertible bond regressions. Second, we estimate the<br />

sensitivities of individual convertible bonds and the aggregated index towards hedge fund demand by running<br />

regression analysis controlling for the other risk factors. Price pressure exists if the coefficient on the hedge fund<br />

demand measure is significantly negative. Convertibles are undervalued during recessions, and we therefore<br />

expect the total effect of recession on expected bond returns to be positive. Third, we<br />

construct the risk factor based on the sensitivity of convertible bonds estimated by 12- and 24-month rolling<br />

regressions. We rank the universe of convertible bonds by sensitivity. The risk factor is the out-of-sample return of a<br />

long/short portfolio of convertible bonds, with a long position in the convertible bonds from the top decile and a<br />

short position in those from the bottom decile. The cumulative and monthly risk-adjusted return should out-perform<br />

the index in backtesting, as in this way we hedge against stochastic demand<br />

fluctuations. Finally, we estimate the risk factor premium following the Fama-MacBeth procedure.<br />
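The sensitivity-ranking and long/short construction in the third step can be sketched as follows. This is a schematic with synthetic data, not the paper's estimation: the demand measure, the single control factor, and all parameter values are placeholders, and the rolling OLS stands in for the full conventional multi-factor model.<br />

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, window = 96, 40, 24             # months, bonds, rolling window length

demand = rng.normal(size=T)           # stand-in for excess hedge fund demand changes
market = rng.normal(size=T)           # stand-in for the conventional risk factors

# Synthetic bond returns: the first half of the universe is price-pressure
# sensitive (negative loading on demand), the rest is not.
true_beta = np.where(np.arange(N) < N // 2, -0.8, 0.0)
returns = (0.3 * market[:, None] + true_beta[None, :] * demand[:, None]
           + 0.2 * rng.normal(size=(T, N)))

def demand_sensitivity(returns, demand, market, window):
    """OLS loading on demand over the trailing window, controlling for market."""
    X = np.column_stack([np.ones(window), market[-window:], demand[-window:]])
    coefs, *_ = np.linalg.lstsq(X, returns[-window:], rcond=None)
    return coefs[2]                   # one demand beta per bond

beta_hat = demand_sensitivity(returns, demand, market, window)

# Long/short factor: long the top decile of sensitivities, short the bottom decile.
order = np.argsort(beta_hat)
n_dec = max(N // 10, 1)
factor_return = returns[-1, order[-n_dec:]].mean() - returns[-1, order[:n_dec]].mean()
```

In the paper the betas would be re-estimated each month on a rolling window and the decile portfolios rebalanced, producing the out-of-sample factor return series used in the Fama-MacBeth stage.<br />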

The structure of the paper is as follows. Section 2 discusses the relevant existing literature on convertible<br />

bonds. Section 3 describes the data used for our empirical analysis. Section 4 discusses the empirical<br />



methodology and states the testable hypotheses. Section 5 shows the empirical results of Granger causality tests<br />

between hedge fund demand and convertible bond returns. Section 6 summarizes our results.<br />

2 Literature Review<br />

Our paper aims at contributing to the academic literature devoted to convertible bonds, which is<br />

relatively dispersed in terms of topics and applications. One of the most popular themes in convertible bond research<br />

is the perceived mispricing anomaly in convertible bond valuation. Our paper contributes to this<br />

stream of literature by creating a new priced risk factor as a state variable in the pricing modeling of convertible<br />

bonds and argues that ignoring this risk factor adds to the mispricing differential. The systematic<br />

mispricing of convertibles has been well documented in the existing academic literature. A good survey of possible<br />

explanations is provided in the paper by M. Ammann et al. (JBF 2003). For example, Calamos (2003) attributed the<br />

underpricing anomaly to underestimation of the stock volatility. On the other hand, Lhabitant (2002) argued<br />

that the mispricing is driven by the complexity of convertible bond valuation. The paper by Agarwal et al. (2007)<br />

suggests that convertible bonds are mispriced due to illiquidity, small issue size, and again the complexities of pricing.<br />

Special attention in the academic literature has been given to the performance of new issues, as convertibles are<br />

more likely to be mispriced at issue than other asset classes due to their complexity. Both the short-term and long-term<br />

performance of new issues has been of interest to researchers and practitioners. New-issue performance was studied<br />

by Henderson (2005), who reported that convertibles are underpriced at issuance and found consequent<br />

positive excess risk-adjusted returns. The association between increased short selling of the underlying equity around<br />

convertible bond issuance and liquidity and efficiency was discussed by Choi et al. (2005).<br />

Mitchell et al. (2007) are the first to mention the existence of forced selling risk in convertible bond valuation,<br />

comparing the mispricing of convertible bonds (relative to a theoretical model) to the returns of hedge funds with large<br />

holdings of convertible bonds. They also conducted an analysis of merger arbitrageurs' activity and merger<br />

targets during market crashes. This is the first step in estimating the hedge fund demand risk for convertible bonds,<br />

which leads to forced selling risk at market turning points. However, relying on a theoretical model might be<br />

misleading due to the complexity of convertible valuation and possible model misspecification risk. Therefore we<br />

prefer to rely on the empirical estimation of the sensitivities of convertible bonds towards hedge fund demand in<br />

deriving the asset pricing implications.<br />

The risk and return characteristics of convertible arbitrage hedge funds were empirically studied by Agarwal et al.<br />

(2007). The authors constructed the following factor portfolios mimicking the convertible arbitrage strategy: positive<br />

carry, volatility arbitrage and credit arbitrage. They explained the abnormal returns of convertible arbitrage hedge funds<br />

as a liquidity premium. In addition, they constructed a supply/demand imbalance factor which was found to be<br />

significant in explaining the cross-section of convertible arbitrage hedge fund returns. In our paper, the price<br />

pressure factor is also based on the demand/supply imbalance; however, our objective is to show that this factor is<br />

important first and foremost for convertible bond pricing, and only as a consequence will a convertible bond<br />

investor pay a price premium. This is the channel through which the factor appears to be significant in risk models for a<br />

fund holding a long position in convertible bonds.<br />

As the price pressure factor can be thought of as a liquidity factor, many papers on liquidity measures and on how liquidity<br />

matters in portfolio risk models and fund performance measurement models are relevant for our purposes. In<br />

terms of the liquidity risk factor, the paper by Brunnermeier (2009) predicts that investors choose not to sell illiquid assets<br />

and instead choose to sell liquid assets. The paper by Acharya & Pedersen (2005) predicts that investors are<br />

unlikely to sell high liquidity beta assets and instead sell other assets in a downturn. We also observe this behavior,<br />

with liquidity beta substituted by the sensitivity of convertible bonds to hedge fund demand, and study how the effect of<br />

price pressure varies during financial crisis and recession periods.<br />

Finally, the only study of convertible bond mutual funds, to the best of our knowledge, was conducted by Ammann et<br />
al. (2007), who constructed a set of factors driving convertible bond fund performance. It includes stock factors,<br />
bond factors, option factors and fund-specific factors. The authors addressed the problem of time-varying betas in<br />
their multi-factor regression models by employing rolling regressions and models with latent variables using<br />
Kalman filtering. This type of model constitutes the performance measurement methodology for a convertible<br />
bond fund. We use the setup of multi-factor models for convertible bonds in the most parsimonious case to create<br />
control variables for alternative risk sources.<br />

3 Data<br />

The sample of convertible bonds represents the constituents of the UBS Global Convertible Index (UCBIGLBL Index).<br />
The data are at monthly frequency from October 2002 to September 2009. It is a broad-based index representing the<br />
convertible market and is often used as a benchmark for global convertible funds. It is a market-capitalization-weighted<br />
total return index which was launched on 30 September 1998; however, historical data for all its<br />
constituents are available only from 30 September 2002 onwards. The constituents of the index comprise equity-linked<br />
convertible bond issues such as convertibles, exchangeables, mandatory issues, bonds with warrants and similar<br />
products, which must be convertible into a listed share. The index is devised by UBS and independently maintained<br />
by MACE Advisers, a supplier of convertible analysis tools and data.<br />

Another empirical source comes from the demand side, represented by convertible arbitrage hedge funds. The data<br />
source for these funds is the TASS Lipper hedge fund database. We combine both live and<br />
graveyard hedge funds in our sample in order to reduce survivorship bias. Moreover, we filter the hedge funds,<br />
selecting only those that report in USD at monthly frequency and for which convertible arbitrage is the primary investment strategy.<br />
There is a total of 84 convertible arbitrage funds in the sample. For the aggregate<br />
computations, we take the official CSFB/Tremont Convertible Arbitrage hedge fund index.<br />

For additional factors, we use various data sources. The equity factor is the market return in excess of the risk-free<br />
rate, where the market includes all NYSE, AMEX, and NASDAQ stocks and the risk-free rate is the 1-month T-bill<br />
rate. The data are from the K. French data library, which is publicly available and regularly updated;<br />
the risk-free rate is originally from Ibbotson Associates. Furthermore, the data on the term<br />
spread and the default spread are collected from the FRED database of the Federal Reserve Bank of St. Louis. We take<br />
Moody's Seasoned Aaa and Baa Corporate Bond Yields and the 1-year (GS1) and 10-year (GS10) Treasury Constant Maturity<br />
Rates. The credit risk or default spread is measured by the difference between the Baa and Aaa yields. The interest<br />
rate risk or term spread is the difference between GS10 and GS1. The term spread captures variation in the slope of the<br />
yield curve; variation in its level is captured by the risk-free rate, or short-term T-bill yield, approximated<br />
by the 3-month Treasury rate (TB3MS). Month-end closing prices for the CBOE Volatility Index (VIX) are<br />
downloaded directly from the Chicago Board Options Exchange web page (www.cboe.com) and used as a measure<br />
of the volatility factor. Finally, the trend-following hedge fund factors, as in Fung & Hsieh (2004), are<br />
available from D. Hsieh's data library.<br />
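As a concrete illustration, the bond and volatility factors described above reduce to simple differences of the raw series. The following minimal sketch uses hypothetical yield and VIX values standing in for the FRED and CBOE data (all numbers are made up for illustration):

```python
import numpy as np

# Hypothetical month-end values in percent (stand-ins for the real series).
aaa  = np.array([5.2, 5.1, 5.3, 5.0])     # Moody's Aaa corporate yield
baa  = np.array([6.1, 6.2, 6.6, 6.4])     # Moody's Baa corporate yield
gs1  = np.array([4.0, 4.1, 3.9, 3.8])     # 1-year Treasury constant maturity
gs10 = np.array([4.6, 4.7, 4.8, 4.5])     # 10-year Treasury constant maturity
vix  = np.array([14.2, 16.8, 15.1, 19.3]) # month-end VIX close

default_spread = baa - aaa   # credit risk factor (DEF)
term_spread = gs10 - gs1     # interest rate risk factor (TERM)
fd_vix = np.diff(vix)        # volatility factor: first difference of VIX

print(default_spread, term_spread, fd_vix)
```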

For the robustness check section, we will look at short interest data on the underlying equity. Unfortunately, we are<br />
not able to collect short interest data for each individual stock underlying a convertible bond issue. Therefore<br />
we construct an approximate measure based on the short interest index available in Bloomberg. We consider the New York<br />
Stock Exchange US Short Interest Mid-Month Index (NYSINYSE Index), the mid-month short interest value for the New York<br />
Stock Exchange, measured around the 15th of the month, reported between the 18th and the 25th, and later revised around<br />
the 6th of the next month. We consider the level and the monthly percentage growth of the index as potential variables<br />
of interest in our study.<br />

4 Testable hypotheses and methodology<br />

In this section, we present the empirical methodology of the paper step by step. First, we assess the impact of hedge<br />
fund demand on expected convertible bond returns and formulate testable hypotheses. Second, we estimate the<br />
sensitivities of convertible bonds to hedge fund demand and construct decile portfolios based on the<br />
sensitivity ranking. Third, we create the risk factor as a long/short portfolio of convertible bonds from the extreme<br />
deciles and test its pricing implications. Finally, we test this risk factor within the framework of risk<br />
modeling for a fund that invests in convertible bonds.<br />

103


First, we construct the multi-factor pricing model for convertible bonds in order to identify the common<br />
sources of return and risk. The general form of the multi-factor model for each security is<br />
<br />
$R_{CB} = \sum_{i=1}^{M} \beta_i F_i$<br />
<br />
where $\beta_i$ are the factor loadings, $F_i$ are the factor returns, and $R_{CB}$ is the convertible bond return in excess of the risk-free<br />

rate. As a hybrid security, a convertible bond has a variety of risk exposures coming from the equity risk, the bond<br />
structure and the embedded option. Our goal is to keep the multi-factor model as parsimonious as possible and at the<br />
same time to control for the main systematic risk exposures. The equity risk factor is the stock market return in excess<br />
of the risk-free rate. There are two main bond risk factors: credit risk, captured by the default spread between<br />
corporate bond yields, and interest rate risk, captured by the term spread between long-term and short-term Treasury rates.<br />
Moreover, the option factor is measured by volatility risk, captured by the first difference of the CBOE VIX index:<br />
<br />
$R_t = \alpha + \beta_1 MKT_t + \beta_2 TERM_t + \beta_3 DEF_t + \beta_4 fdVIX_t + u_t$<br />

Three types of regression estimations are conducted: without a dummy variable; with a dummy variable for the<br />
recession period from January 2007 to June 2009; and with a dummy variable for the post-Lehman period from August 2008<br />
to March 2009. The OLS regression method is applied to the regressions for the monthly convertible index returns.<br />
In addition, we run the analysis for individual convertible bonds using a fixed-effect panel data estimation<br />
method.<br />
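The estimation step above can be sketched in a few lines. The following is an illustrative reconstruction rather than the authors' code: it generates synthetic factor series with known loadings (all numbers are assumptions) and recovers them by OLS, including a recession dummy as in Model II.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 84  # monthly observations, roughly Oct 2002 - Sep 2009

# Synthetic stand-ins for the factors described above (not real data).
mkt = rng.normal(0.5, 4.0, T)      # equity market excess return
term = rng.normal(0.0, 0.3, T)     # term spread: GS10 - GS1
def_ = rng.normal(0.0, 0.2, T)     # default spread: Baa - Aaa
fdvix = rng.normal(0.0, 2.0, T)    # first difference of VIX
dum = (np.arange(T) >= 51).astype(float)  # recession dummy, last 33 months

# Convertible index excess return generated with known loadings plus noise.
r_cb = (0.3 + 0.9 * mkt - 0.2 * term - 3.0 * def_ + 0.1 * fdvix
        + 2.0 * dum + rng.normal(0.0, 1.0, T))

def ols(y, factors):
    """OLS coefficients and t-statistics via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(factors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se

# Model II: conventional factors plus the recession dummy.
beta, tstat = ols(r_cb, [mkt, term, def_, fdvix, dum])
print(dict(zip(["const", "emkt", "term", "def", "vix_gr", "recession"],
               np.round(beta, 2))))
```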

Second, we construct the hedge fund demand measure as the ratio of total assets under management of<br />
convertible arbitrage hedge funds to the market capitalization of the convertible bond index. Price pressure<br />
is primarily generated by hedge funds that invest in convertibles; we assume constant demand from other<br />
convertible market participants. This can be implicitly confirmed by comparing the growth in assets under<br />
management of the convertible arbitrage hedge funds in our sample with the market capitalization of<br />
convertible bonds. We further assume that convertible arbitrage hedge funds are the main participants in convertible<br />
bond trading, so total assets under management are proportional to hedge fund demand. Normalization by the<br />
market capitalization of convertible bonds is required to ensure consistency of the demand measure with total convertible<br />
issuance. This normalization is also related to the supply/demand imbalance factor for convertible<br />
arbitrage hedge funds used in the paper by Agarwal & Naik (2007). We would like to show that what matters is not the<br />
imbalance between supply and demand but rather exogenous shocks influencing the demand equation and not the<br />
supply. The drawback of total assets under management as a demand measure is that it excludes leverage.<br />
However, the TASS database does not have a monthly leverage indicator for each hedge fund; leverage<br />
is only available as a constant fund characteristic, so we cannot use it to track the dynamics of this variable.<br />
For a robustness check, we can scale the AUM measure by the constant leverage ratio (average leverage as defined in the TASS<br />
database) to enhance the demand level.<br />
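The demand measure itself is a simple ratio. A minimal sketch with hypothetical monthly series (the AUM and market-cap figures below are invented for illustration, as is the 2x average leverage used in the robustness variant):

```python
import numpy as np

# Hypothetical monthly series in USD bn: AUM summed over the convertible
# arbitrage funds, and the convertible index market capitalization.
aum = np.array([18.0, 19.5, 21.0, 24.0, 22.5, 17.0])
mktcap = np.array([190.0, 195.0, 200.0, 205.0, 198.0, 180.0])

# Demand measure: hedge fund AUM normalized by convertible market size.
demand = aum / mktcap

# Robustness variant: scale AUM by a constant average leverage ratio
# (the only leverage figure TASS reports), assumed here to be 2x.
avg_leverage = 2.0
demand_lev = avg_leverage * aum / mktcap

print(np.round(demand, 3))
```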

Furthermore, we analyze the causality relationship between convertible bond returns and hedge fund demand.<br />
The Granger test is conducted both for returns of the convertible bond index and for residuals from the conventional<br />
multi-factor model. On the one hand, the reason for the causality testing is to avoid potential endogeneity bias in all<br />
subsequent regressions. On the other hand, the causality relationship is not trivial and impossible to detect without a<br />
formal testing procedure. Under the null hypothesis, hedge fund demand Granger-causes convertible bond returns,<br />
such that convertible arbitrage hedge funds are the driving force in the convertible bond market. We expect a positive<br />
sign of the causality. However, the magnitude of causality might differ across bull and bear markets or<br />
during financial crisis periods, and even the sign might change at market turning points. We assume that this is<br />
likely to happen either during a recession or during a financial crisis. Therefore we test the hypothesis including two<br />
dummies, one for the 2007-2009 recession and one for the post-Lehman months, to see how convertibles reacted to those market<br />
events. The alternative hypothesis is based on the idea that assets chase returns: growth in hedge fund demand is<br />
explained by attractive investment opportunities in the convertible universe, and even arbitrage hedge funds earn most<br />
of their performance on long positions (in our case, on convertibles rather than on shorting the underlying stock).<br />
In that case convertible bond returns should Granger-cause hedge fund demand, with demand lower whenever historical<br />
returns of convertibles are lower.<br />

Null Hypothesis 1: Convertible arbitrage hedge fund demand Granger-causes convertible bond returns.<br />
Alternative Hypothesis 1: Convertible bond returns Granger-cause convertible arbitrage hedge fund demand.<br />
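The Granger test amounts to an F-test of whether lags of one series add explanatory power for the other beyond its own lags. A minimal sketch on synthetic data (the data-generating process, lag count and coefficients are assumptions chosen so that demand drives returns, mirroring Null Hypothesis 1):

```python
import numpy as np

def granger_f(y, x, p=5):
    """F-statistic for H0: lags of x add no explanatory power for y,
    i.e. x does not Granger-cause y (restricted vs unrestricted OLS)."""
    T = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))
    Xr = np.hstack([ones, lags_y])           # restricted model
    Xu = np.hstack([ones, lags_y, lags_x])   # unrestricted model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    dof = (T - p) - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / dof)

rng = np.random.default_rng(1)
T = 84
demand = rng.normal(size=T)  # i.i.d. demand innovations
# Returns driven by lagged demand, so demand should Granger-cause returns.
ret = np.empty(T)
ret[0] = 0.0
for t in range(1, T):
    ret[t] = -0.8 * demand[t - 1] + 0.1 * rng.normal()

G = granger_f(ret, demand)   # demand -> returns (expected large)
F = granger_f(demand, ret)   # returns -> demand (expected small)
print(round(G, 2), round(F, 2))
```

With 5 lags and monthly data this mirrors the test reported in Table 3, where G plays the role of the demand-causes-returns statistic and F the reverse direction.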



Thirdly, we estimate the sensitivities of individual convertible bonds to hedge fund demand using<br />
conventional factors as control variables. We expect, on average, negative values of the estimated sensitivities,<br />
highlighting the price pressure effect from hedge funds.<br />

Null Hypothesis 2: Hedge fund demand has a negative and statistically significant effect on monthly returns of convertible<br />
bonds.<br />
Alternative Hypothesis 2: There is no statistically significant relationship between hedge fund demand and convertible<br />
bond monthly returns.<br />

If it is determined that the demand side has a negative impact on expected convertible bond returns, an investor<br />
in convertible bonds should hedge against demand fluctuations in his optimal convertible bond portfolio. The sensitivity<br />
is a liquidity indicator for which an investor is expected to pay a premium; whether it is priced or not is the goal of the<br />
next step in our empirical study. We estimate the rolling regression for sensitivities on a fixed-size sample window<br />
of 24 months with 1-month rebalancing in order to construct the time series of convertible bond sensitivities. We<br />
rank the universe of convertible bonds into deciles according to the values of the sensitivities. Afterwards, we form the<br />
long/short portfolio by buying bonds with high sensitivity (top decile) and selling those with low sensitivity (bottom<br />
decile). We compute the out-of-sample returns of this portfolio, which is effectively the factor-mimicking portfolio.<br />
We further test whether this is a priced risk factor for the convertible bond universe by estimating the risk premium. We<br />
conduct the regression analysis using recession and post-Lehman dummy variables to track the impact of the financial<br />
crisis on the risk premium. During a crisis, the price pressure factor becomes a forced-selling risk factor and the factor<br />
return differential is more pronounced.<br />
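The rolling estimation and decile sort can be sketched as follows. This is an illustrative reconstruction on a synthetic panel (panel size, window length, factor loadings and noise levels are all assumptions), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(7)
n_bonds, T, window = 50, 60, 24

# Hypothetical panel: each bond loads on the market plus, with its own
# (on average negative) sensitivity, on hedge fund demand innovations.
mkt = rng.normal(0.5, 4.0, T)
d_demand = rng.normal(0.0, 1.0, T)           # demand innovations
true_sens = rng.normal(-0.5, 0.3, n_bonds)   # bond-specific sensitivities
ret = (0.7 * mkt[None, :] + true_sens[:, None] * d_demand[None, :]
       + rng.normal(0.0, 0.5, (n_bonds, T)))

def sensitivity(r, win):
    """Per-bond OLS slope on demand, controlling for the market factor."""
    X = np.column_stack([np.ones(window), mkt[win], d_demand[win]])
    beta, *_ = np.linalg.lstsq(X, r[win], rcond=None)
    return beta[2]

t0 = T - 1 - window          # window ends the month before t = T-1
win = slice(t0, t0 + window)
sens = np.array([sensitivity(ret[i], win) for i in range(n_bonds)])

# Rank into deciles and form the long/short factor-mimicking portfolio:
# long the top decile (highest sensitivity), short the bottom decile,
# evaluated on the next out-of-sample month.
order = np.argsort(sens)
bottom, top = order[:n_bonds // 10], order[-(n_bonds // 10):]
factor_ret = ret[top, T - 1].mean() - ret[bottom, T - 1].mean()
print(round(factor_ret, 3))
```

In the paper's setup this step is repeated each month with 1-month rebalancing to build the full factor return series.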

Null Hypothesis 3: The price pressure risk factor is a priced risk factor (significant risk premium) for the convertible bond<br />
universe.<br />
Alternative Hypothesis 3: There is no statistically significant risk premium on the price pressure risk factor in the<br />
convertible bond universe.<br />

If price pressure is a priced risk factor for convertible bonds, it should appear as part of the risk model of any<br />
convertible bond investor, including convertible arbitrage hedge funds. Absent a pure convertible bond<br />
mutual fund database, we test whether the factor is significant for convertible arbitrage hedge funds;<br />
however, we admit this testing procedure would be better applied to long-only convertible bond mutual funds, as<br />
they are the investors who can perceive hedge fund price pressure as a systematic risk factor. Adding the new<br />
systematic risk factor can also improve the multi-factor performance measurement model for a<br />
convertible bond fund. We formulate the following hypotheses:<br />
Null Hypothesis 4: A convertible bond fund has significant risk exposure to the price pressure risk factor (beta) and<br />
has lower alpha when accounting for the previously missing risk factor.<br />
Alternative Hypothesis 4: There is no statistically significant relationship between convertible bond fund returns and<br />
the price pressure risk factor.<br />

In order to evaluate the performance of the convertible arbitrage hedge fund, we add the price pressure risk<br />
factor to an existing benchmark factor model. We run the analysis across different types of benchmark model.<br />
First, we use a one-factor model where the only systematic risk factor is the market, which in this case is the Credit<br />
Suisse/Tremont convertible arbitrage hedge fund index monthly return. Second, we apply the six-factor model of<br />
Hasanhodzic & Lo (2004), where the factors correspond to different asset classes. Namely, the following factors are<br />
employed: the S&P 500 total return, the USD Index return, the Goldman Sachs Commodity Index (GSCI) total return,<br />
the Moody's Corporate Aaa Bond Index return, the spread between the US Aggregate Long Credit Baa<br />
Bond Index and the Lehman Treasury Long Index (default spread), and the first difference of the CBOE Volatility<br />
Index (VIX). Finally, we use the original eight-factor risk model of Fung & Hsieh (2004) that captures<br />
the risk of well-diversified hedge fund portfolios. The first three factors are the trend-following risk factors<br />
constructed in the paper by Fung & Hsieh (2004): PTFSBD, the bond trend-following factor (return of the PTFS bond<br />
lookback straddle); PTFSFX, the currency trend-following factor (return of the PTFS currency lookback straddle); and<br />
PTFSCOM, the commodity trend-following factor (return of the PTFS commodity lookback straddle). The next two<br />
factors are equity-oriented risk factors: the equity market factor (S&P 500 index monthly total return) and<br />
the size spread factor (Russell 2000 index monthly total return minus the S&P 500 monthly total return). The last<br />
two factors are bond-oriented risk factors: the bond market factor, the monthly change in the 10-year<br />
Treasury constant maturity yield (month end to month end), and the credit spread factor, the monthly change in the<br />
Moody's Baa yield less the 10-year Treasury constant maturity yield (month end to month end), both available from the<br />
Federal Reserve Bank of St. Louis FRED database.<br />
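The logic of Null Hypothesis 4 can be illustrated in a few lines: a fund with a hidden exposure to a priced factor shows inflated alpha under the benchmark-only model, and the alpha falls once the factor is added. The sketch below uses synthetic series with assumed loadings and an assumed positive factor premium, not real fund data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 84
bench = rng.normal(0.6, 2.0, T)  # stand-in for the CS/Tremont CA index return
pp = rng.normal(1.0, 1.5, T)     # price pressure factor with a positive premium

# Hypothetical fund: no true skill, but an exposure to the price pressure
# factor that the one-factor benchmark model cannot see.
fund = 0.5 * bench + 0.4 * pp + rng.normal(0.0, 0.5, T)

def ols_coefs(y, factors):
    """Intercept (alpha) and slopes by least squares."""
    X = np.column_stack([np.ones(T)] + list(factors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b1 = ols_coefs(fund, [bench])      # benchmark model only
b2 = ols_coefs(fund, [bench, pp])  # augmented with the price pressure factor
print("alpha, benchmark only:", round(b1[0], 3))
print("alpha, with PP factor:", round(b2[0], 3))
```

The same comparison carries over unchanged when the benchmark set is the six-factor Hasanhodzic & Lo model or the eight-factor Fung & Hsieh model: the price pressure factor is simply appended as an extra regressor.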

5 Empirical Results<br />

5.1 Conventional Factor Model<br />

Table 1 reports the time-series estimation of the multi-factor model for the convertible bond index returns. OLS coefficients<br />
are displayed in the table along with their t-statistics. Three types of regression estimations are conducted: without a<br />
dummy variable (Model I); with a dummy variable for the recession period from January 2007 to June 2009 (Model II);<br />
and with a dummy variable for the post-Lehman period from August 2008 to March 2009 (Model III). The first column shows that<br />
all factors except the term spread are significant. The convertible bond index has a high equity market beta of 0.91, which can<br />
be explained by the high average delta of convertibles. Default risk has a negative effect on expected bond returns.<br />
Moreover, as with any option, high volatility leads to a higher expected return on the option embedded in the convertible<br />
structure. The adjusted R-squared of the conventional model is about 0.60. Adding the recession dummy<br />
does not change the coefficients dramatically, but the dummy variable is statistically significant with a positive estimated<br />
coefficient. This means that convertible bonds have positive expected returns and investment attractiveness<br />
during recessions, which can be explained by the downside protection of the bond floor. This is consistent with the<br />
idea that convertible bonds provide balanced investment opportunities, hedging downside risk while retaining<br />
equity upside. Investor risk aversion rises during recessions, and investors are likely to switch to bond-type<br />
investments. The recession dummy even increases the adjusted R-squared. However, the post-Lehman dummy<br />
variable, though defined in the middle of the recession period, has a significantly negative coefficient. We<br />
explain this negative impact by the forced-selling risk of convertibles that indeed materialized in October 2008 and<br />
partially in March 2009. This was a period of liquidity shortage and investor panic, when<br />
convertibles were sold off massively, with a strong impact on convertible bond returns.<br />

              Model I   Model II  Model III<br />
const           4.85      6.85      1.35<br />
               (4.95)    (6.72)    (1.46)<br />
emkt            0.91      1.03      0.39<br />
               (7.29)    (8.77)    (3.11)<br />
term           -0.15     -0.17     -0.24<br />
              (-0.44)   (-0.52)   (-0.87)<br />
def            -3.48     -6.29      0.73<br />
              (-4.72)   (-6.54)    (0.87)<br />
vix gr          7.51      8.50      3.96<br />
               (2.19)    (2.71)    (1.44)<br />
Recession         -       5.83        -<br />
                         (4.08)<br />
Post-Lehman       -         -    -16.51<br />
                                 (-6.96)<br />
Adj R2          0.59      0.66      0.75<br />

Table 1: Conventional Model for a convertible bond index. OLS estimates.<br />

Table 2 reports a similar regression analysis for individual bonds, applying the fixed-effect panel data<br />
estimation technique. For individual bonds, the VIX factor is insignificant while, on the other hand, the term spread turns<br />
out to be statistically significant. The equity market beta has a lower value of about 0.65. The dummy variables are<br />
statistically significant, but the signs of the estimated coefficients are opposite. During the recession, expected returns on<br />
individual bonds are negative, probably due to the presence of many high-delta convertibles which carry high equity<br />
market risk. The coefficient on the post-Lehman dummy variable is highly negative, showing that convertible bonds were<br />
quite undervalued and under price pressure during this period.<br />



              Model I   Model II  Model III<br />
emkt            0.65      0.64      0.63<br />
              (39.55)   (37.63)   (33.71)<br />
term           -0.35     -0.27     -0.37<br />
              (-4.86)   (-3.60)   (-5.08)<br />
def             1.10      1.51      1.52<br />
               (7.78)    (9.17)    (7.62)<br />
vix gr         -1.19     -1.06     -1.02<br />
              (-3.24)   (-2.68)   (-2.56)<br />
Recession         -      -1.28        -<br />
                        (-5.04)<br />
Post-Lehman       -         -     -1.19<br />
                                 (-3.02)<br />
Adj R2          0.14      0.14      0.14<br />

Table 2: Conventional Model for the individual convertible bonds. Fixed-effect panel data estimation method.<br />

Before adding hedge fund demand to the regression analysis, we investigate the Granger-causality<br />
relationship between hedge fund demand and convertible bond returns in order to avoid endogeneity bias and to<br />
detect the true causality relationship. The results of the Granger test are reported in Table 3. We consider four model<br />
specifications. The first is to test directly the causality between the convertible bond index return and hedge fund<br />
demand (Model I in the table). The second is to establish causality between hedge fund demand and the<br />
residuals from the conventional multi-factor model for the convertible bond index (Model II).<br />
Model III is the Granger-causality test of the OLS residuals from the conventional model with the recession dummy<br />
against hedge fund demand. Model IV is the Granger-causality test of the OLS residuals from the<br />
conventional model with the post-Lehman dummy against hedge fund demand. The<br />
use of residuals instead of bond returns is important to derive a causality relationship free of other<br />
systematic risk factors. F and G are values of the F-statistic; Fc and Gc are the critical values from the F-distribution.<br />
If F > Fc, we conclude that convertible bond returns Granger-cause hedge fund demand;<br />
if G > Gc, we conclude that hedge fund demand Granger-causes convertible bond returns. The test is<br />
conducted for 5 lags at the 5 percent level of significance.<br />

                Model I   Model II  Model III  Model IV<br />
F                 1.71      2.42      0.25       1.55<br />
F crit value      3.96      3.11      3.96       3.96<br />
G                 0.90      5.26      6.18       4.52<br />
G crit value      3.96      3.11      3.96       3.96<br />

Table 3: Granger Causality test between Convertible bond Index and hedge fund demand generated by assets under management of convertible<br />

arbitrage hedge funds normalized by the convertible bond index market capitalization.<br />

In no model specification do we find that convertible bond returns cause hedge fund demand fluctuations. In the case of<br />
Model I, there is no causality in the other direction either, which can be explained by interaction with other factors.<br />
However, whenever we control for the additional factors, we find that hedge fund<br />
demand Granger-causes convertible bond returns. This test assures us that endogeneity bias should not be an issue when<br />
we study the effect of hedge fund demand on convertible bond returns.<br />

5.2 Price Pressure Factor<br />

Having established an impact of convertible arbitrage hedge funds on convertible bond returns, we would like to<br />
determine whether a price premium exists for convertible bonds with different sensitivities to demand and, moreover, to<br />
construct the risk factor as a top-minus-bottom decile portfolio of convertible bonds sorted by their sensitivity.<br />
First, for each individual convertible bond issue, we estimate its sensitivity to hedge fund demand on a sample<br />
window of either 12 or 24 months. Sensitivity is estimated under different model specifications. First,<br />
we consider a univariate framework. Second, we add the convertible bond index to capture the systematic risk factor.<br />
Finally, as Model III, we use our conventional model. Results are reported in Table 4.<br />

             Model I   Model II  Model III  Model IV  Model V<br />
Cbind            -       0.33        -         -         -<br />
                       (28.61)<br />
emkt             -         -       0.65      0.64      0.64<br />
                                 (40.83)   (39.06)   (37.62)<br />
def              -         -      -0.29     -0.19     -0.32<br />
                                 (-3.72)   (-2.38)   (-4.01)<br />
term             -         -       1.04      1.43      1.35<br />
                                  (7.04)    (8.42)    (6.10)<br />
vix              -         -      -1.03     -0.77     -0.79<br />
                                 (-3.35)   (-2.25)   (-2.12)<br />
HF demand     -0.41     -0.15     -0.07     -0.09     -0.05<br />
            (-12.20)   (-4.38)   (-2.10)   (-2.64)   (-1.44)<br />
D1recession      -         -         -      -1.24        -<br />
                                           (-4.96)<br />
D2Lehman         -         -         -         -      -0.78<br />
                                                     (-1.97)<br />
Adj R2         0.01      0.06      0.14      0.14      0.14<br />

Table 4: OLS estimates of the sensitivities of individual convertible bonds toward hedge fund demand.<br />

For the portfolio formation, we use next-month out-of-sample returns. We rank our convertible bonds according<br />
to their sensitivities and form equally-weighted decile portfolios. Then we compute the next-month out-of-sample<br />
return of each portfolio. The return differential between the extreme decile portfolios indicates the existence of a price<br />
premium for the convertible bond factor and that it is priced in equilibrium.<br />
We finally select the bonds with the top 10 and bottom 10 statistically significant values of estimated sensitivity (at the 5<br />
percent level). The portfolio consists of an equally-weighted long position in the top 10 and a short position in the bottom 10<br />
bonds. We compute the out-of-sample return of this portfolio in the next month. We also report the alphas of the top and<br />
bottom decile portfolios and the alpha of the factor-mimicking portfolio as the difference between the top and bottom alphas<br />
(for the estimation window of 12 months). A positive alpha indicates the existence of a risk premium.<br />
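The significance-filtered selection step can be sketched as follows. This is an illustration on synthetic output (the sensitivity distribution, the common standard error of 0.15 behind the t-statistics, and the next-month returns are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
# Hypothetical estimation output: sensitivity per bond, an illustrative
# t-statistic (assuming a common standard error of 0.15), and each bond's
# realized next-month out-of-sample return.
sens = rng.normal(-0.5, 0.4, n)
tstat = sens / 0.15
next_ret = rng.normal(0.5, 3.0, n)

significant = np.abs(tstat) > 1.96       # keep bonds significant at 5%
idx = np.where(significant)[0]
ranked = idx[np.argsort(sens[idx])]      # ascending sensitivity

# Long the 10 highest-sensitivity bonds, short the 10 lowest, equally weighted.
bottom10, top10 = ranked[:10], ranked[-10:]
ls_return = next_ret[top10].mean() - next_ret[bottom10].mean()
print(round(ls_return, 2))
```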

Window 12       Model I Top   Model I Low   Alpha Top-Low<br />
Alpha               0.13        -0.19           0.31<br />
                   (0.20)      (-0.19)<br />
emkt                0.57         0.75<br />
                   (6.59)       (5.63)<br />
term                0.07        -0.42<br />
                   (0.29)      (-1.12)<br />
def                -0.02         0.50<br />
                  (-0.04)       (0.67)<br />
vix                -1.53        -5.21<br />
                  (-0.65)      (-1.45)<br />
Adj R2              0.56         0.53<br />
<br />
Window 12       Model II Top  Model II Low  Alpha Top-Low<br />
Alpha               0.17        -0.86           1.03<br />
                   (0.23)      (-0.75)<br />
emkt                0.57         0.71<br />
                   (6.37)       (5.18)<br />
term                0.07        -0.36<br />
                   (0.27)      (-0.95)<br />
def                -0.08         1.44<br />
                  (-0.12)       (1.33)<br />
vix                -1.50        -5.53<br />
                  (-0.64)      (-1.54)<br />
Dummy               0.14        -2.02<br />
                   (0.13)      (-1.20)<br />
Adj R2              0.55         0.53<br />
<br />
Window 12       Model III Top Model III Low Alpha Top-Low<br />
Alpha               0.37        -0.80           1.17<br />
                   (0.48)      (-0.67)<br />
emkt                0.61         0.64<br />
                   (5.39)       (3.72)<br />
term                0.07        -0.43<br />
                   (0.30)      (-1.12)<br />
def                -0.86         2.19<br />
                  (-1.17)       (1.45)<br />
vix                -4.07        -7.91<br />
                  (-1.81)      (-1.72)<br />
Dummy               1.76        -3.70<br />
                   (0.95)      (-0.97)<br />
Adj R2              0.71         0.48<br />

Table 5: Performance of the decile portfolios sorted by the sensitivities.<br />

In future research and estimation steps, we will estimate the value of the risk premium and test the price pressure risk<br />
factor as a missing risk factor for convertible arbitrage hedge funds. Moreover, we will investigate the differential of price<br />
pressure for types of convertibles with different Greeks.<br />

6 Summary<br />

This paper sheds light on a previously unnoticed effect of hedge funds on the asset pricing of their<br />
investment instruments. Demand fluctuations from convertible arbitrage hedge funds generate a price<br />
pressure risk on convertible bond pricing. This risk materializes as a forced-selling risk factor during market<br />
turbulence and liquidity shortages. We aim at constructing the price pressure risk factor, based on the sensitivity of<br />
individual convertible bonds to the demand of hedge funds, through a long/short factor-mimicking portfolio, and<br />
at showing its impact on convertible pricing and on risk models for a convertible bond portfolio. In<br />
constructing the factor we rely on information about the total assets under management of convertible<br />
arbitrage hedge funds. We test the significance of price pressure as a priced risk factor for convertible bonds.<br />
Further analysis is required to estimate the corresponding risk premium and to identify price pressure as a missing risk<br />
factor in pricing models of convertible arbitrage hedge funds.<br />

7 References<br />

Ammann, M., Kind, A. & Wilde, C. (2003). Are convertible bonds underpriced? An analysis of the French market.<br />
Journal of Banking and Finance, 27, 635-653.<br />
Ammann, M., Kind, A. & Seiz, R. (2007). What Drives the Performance of Convertible-Bond Funds? Working Paper.<br />
Agarwal, V. & Naik, N. (2004). Risk and Portfolio Decisions Involving Hedge Funds. The Review of Financial<br />
Studies, 17, 63-98.<br />
Fung, W. & Hsieh, D. (2004). Hedge Fund Benchmarks: A Risk Based Approach. Financial Analysts Journal, 60,<br />
65-80.<br />
Hasanhodzic, J. & Lo, A. (2004). Can hedge-fund returns be replicated? The linear case. Journal of Investment<br />
Management, 2, 5-41.<br />



THE PRICING OF EQUITY-LINKED CONTINGENT CLAIMS UNDER A LOGNORMAL<br />

SHORT RATE DYNAMICS<br />

Rosa Cocozza and Antonio De Simone,<br />

University of Napoli “Federico II”, Italy<br />

Email: rosa.cocozza@unina.it, a.desimone@unina.it<br />

www.docenti.unina.it/rosa.cocozza<br />

Abstract. We propose a numerical procedure for the pricing of financial contracts whose contingent claims are exposed to two sources of<br />
risk: the stock price and the short interest rate. More precisely, in our pricing framework we assume that the stock price dynamics is<br />
described by the Cox, Ross and Rubinstein (CRR, 1979) binomial model under a stochastic risk-free rate, whose dynamics evolves over time<br />
according to the Black, Derman and Toy (BDT, 1990) one-factor model. To this aim, we set the hypothesis that the instantaneous<br />
correlation between the trajectories of the future stock price (conditional on the current value of the short rate) and of the future short rate is<br />
zero. We then apply the resulting stock price dynamics to evaluate the price of a simple contract, i.e. a stock option. Finally, we compare<br />
the derived price to the price of the same option under different pricing models, such as the traditional Black and Scholes (1973) model. We<br />
expect that the difference between the two prices is not appreciably large. We conclude by showing in which cases it may be helpful to adopt the<br />
described model for pricing purposes.<br />

Keywords: option pricing, stochastic short rate model, binomial tree<br />

JEL classification: C63, C65, G13<br />

1 Introduction<br />

In modern option pricing theory many attempts have been made to relax some of the<br />
traditional assumptions of the Black and Scholes (1973) model and, in general, to develop a pricing framework<br />
that does not depend solely on the underlying asset dynamics. Very distinguished in this field are the models allowing for a<br />
stochastic interest rate, as suggested for the first time by Merton (1973). Afterward, Brennan and Schwartz (1980)<br />
proposed a stochastic interest rate model to evaluate the price of convertible bonds. Within this context, if the<br />
conversion date coincides with the bond maturity, the future value of the bond is certain and no assumption on the<br />
interest rate dynamics is necessary. In this case the issues faced in evaluating the convertible are not substantially<br />
different from those arising in determining the market value of an equity-linked endowment policy. On the contrary,<br />
if the conversion occurs before maturity, some assumptions about the future dynamics of the term structure are<br />
necessary because, in this case, the future value of the bond is a random variable depending on the unknown level of<br />
interest rates at the conversion date.<br />

Even in the insurance segment the specification of the interest rate dynamics is very helpful. Recently, life<br />
insurance companies have issued policies that typically combine a guaranteed return with a contingent spread on a<br />

reference asset return paid out to the policyholder under particular conditions. This is the case of participating life<br />

insurance policies allowing policyholders to earn a guaranteed amount and, at the same time, to participate in any<br />
additional profit margin of the insurance company according to a predefined participation rate. As shown<br />
elsewhere (Cocozza and Orlando, 2007; Cocozza et al. 2011), these contracts embed an option that exposes the<br />
issuer to two sources of financial risk: the return on the reference asset and the appropriate discount factor. On the<br />
contrary, if we consider an equity-linked endowment policy, the equilibrium price, as Brennan and Schwartz (1976)<br />
showed in their seminal paper, is equal to the sum of the present value of a zero coupon bond and that of an<br />
immediately exercisable call option on the reference asset or, alternatively, to the present value of the reference asset<br />
plus that of an immediately exercisable put option on the same reference asset. As a matter of fact, such<br />
contracts are not insurance policies but proper financial products, issued not only by life insurance companies<br />
but also by other financial institutions.<br />

but also by other financial institutions.<br />

In this article we propose an innovative numerical procedure for the pricing of financial contracts whose<br />

contingent claims are exposed to two risk sources: the stock price and the interest rate. More precisely, in our pricing<br />

framework we assume that the stock price dynamics is described by the Cox, Ross and Rubinstein (1979) binomial<br />
model (CRR) under a stochastic risk-free rate, whose dynamics evolves over time according to the Black, Derman<br />
and Toy (1990) one-factor model (BDT). The BDT model avoids some drawbacks that typically affect equilibrium<br />
models of the term structure, such as negative spot interest rates. At the same time it offers the relevant opportunity to<br />
efficiently calibrate the risk factor trajectories, preventing the adoption of parameters that do not allow the<br />
endogenous reproduction of the observed term structure. Last but not least, the conjoint adoption of the CRR and BDT<br />
models may sensibly reduce the computational effort in the estimation of the parameters by adopting implied<br />
volatility measures.<br />

The paper is structured as follows. In section 2 the main contributions to option pricing theory in a stochastic<br />
interest rate framework are reported, together with a brief illustration of the BDT and the CRR models. Section 3<br />
reports the main assumptions and the mechanics of the numerical procedure adopted to determine the price of a<br />
plain vanilla call option. Section 4 shows some numerical examples, while section 5 concludes the paper with the main<br />
final remarks.<br />

2 Option pricing models with stochastic interest rate<br />

As stated, a stochastic interest rate model for option pricing was proposed for the first time by Merton (1973),<br />

where the Gaussian process was adopted to describe the continuous-time short rate dynamics. The adoption of a<br />

Gaussian process was very common in the ’80s and in the early ’90s before the advent of the lognormal term<br />

structure models. A discrete-time dynamics for the short rate process, equivalent to that adopted by Merton, was<br />

subsequently discussed by Ho and Lee (1986), while other option pricing formulae under Gaussian interest rate were<br />

introduced by Rabinovitch (1989) and Amin and Jarrow (1992).<br />

The success of the Gaussian models of the term structure relies on the mathematical tractability and thus on the<br />

possibility of obtaining closed formulas and solutions for the price of stock and bond options. In fact, the Gaussian<br />

process was for the first time adopted to derive the price of bond options by Vasicek (1977). Furthermore, the<br />

calibration of the Gaussian models does not require particularly demanding computational effort.<br />

Although the Gaussian models have been very successful for research purposes, some relevant inner drawbacks<br />

prevented their diffusion among the practitioners, as for example the possibility for the interest rate trajectories to<br />

assume negative values. In response, other equilibrium models for the interest rate term structure have been<br />

developed. One of these is the well-known Cox, Ingersoll and Ross (1985) model (CIR), where the interest rate<br />

dynamics is described by a square root mean reversion process that, under the Feller condition (Feller, 1951), does<br />

not allow the interest rate to become negative. CIR dynamics has subsequently been adopted also to describe the<br />

stochastic short rate framework for pricing stock options (Kunitomo and Kim, 1999 and 2001) and for pricing<br />

endowment policies, with an asset value guarantee, where the benefit is linked to fixed income securities (Bacinello<br />

and Ortu, 1996).<br />

However, as shown elsewhere (De Simone, 2010), equilibrium models are in general not able to ensure an<br />

efficient calibration of the interest rate dynamics, because they are based on a limited number of parameters, in<br />

general unable to guarantee an acceptable fitting of the model prices to market prices. Moreover, a satisfactory<br />

calibration of the model is sometimes a hard task, because many equilibrium models rely on an instantaneous<br />

interest rate (spot or forward) that, in general, is not directly observable on the market. The relevance of this<br />

problem increased over time, especially after the diffusion of standard market practices of pricing derivatives within<br />

the Black and Scholes environment (Black and Scholes, 1973; Black, 1976).<br />

As mentioned, limitations of the equilibrium models may be overcome by market models, as for example the<br />

BDT model and the Libor Market Model (Brace et al. 1997). A particular characteristic of both models is the<br />

assumption of lognormal interest rate dynamics, even if this hypothesis applies only asymptotically for the BDT<br />

model. Such a feature allows for a satisfactory calibration by adopting implied volatility measures according to the<br />

standard market practices. However, unlike many equilibrium models, market models do not in general allow one to<br />
obtain closed price formulas, so that the price of interest rate derivatives has to be evaluated numerically. Between<br />
the two mentioned models, we choose the BDT because of its simpler approach to pricing derivatives. The BDT<br />
model allows one to obtain a binomial tree for the dynamics of the Libor rate by adopting, as input data, the term<br />
structure of interest rates and of the corresponding volatilities. An exhaustive explanation of the procedure adopted<br />
for the construction of the tree is reported in Neftci (2008). Figure 1 shows a hypothetical BDT tree for the 12-months<br />
spot Libor rate L(t, s), where s – t = 12 months for each t, s.<br />

At time t = 0 the 12-months spot Libor rate L(0,1) is directly observable and therefore not stochastic. After one<br />

year, at time t = 1, the following 12-months Libor rate L(1,2) can go up to the level L(1,2)u or down to the level<br />

L(1,2)d. Similarly, at time t = 2 the Libor rate, starting from the state of the world L(1,2)j, j = u, d, may go up or down with equal<br />
risk neutral probability. We finally notice that since L(t, s)ud = L(t, s)du, the BDT tree is recombining.<br />
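As an illustration of how such a tree can be produced, the sketch below (our own, not the paper's code; function and parameter names are hypothetical) calibrates one BDT step: given the observed 1-year and 2-year spot rates and the volatility of the future 12-months rate, it solves by bisection for the "down" level of L(1,2), with the "up" level tied to it by the BDT lognormal condition L_u = L_d·exp(2σ).

```python
# Hypothetical one-step BDT calibration sketch (our illustration, assumed names).
# The down level of the future 12-months Libor rate is found so that the tree
# reprices the 2-year zero-coupon bond; the up level follows from the volatility.
import math

def bdt_one_step(L01, L02, sigma, tol=1e-12):
    """Return (L12_d, L12_u) matching the 2-year discount factor 1/(1+L02)^2."""
    target = 1.0 / (1.0 + L02) ** 2          # market price of the 2y zero
    lo, hi = 1e-9, 1.0                       # bisection bracket for L12_d
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        up = mid * math.exp(2.0 * sigma)     # BDT: up/down ratio fixed by volatility
        price = 0.5 * (1/(1 + up) + 1/(1 + mid)) / (1.0 + L01)
        if price > target:                   # model price too high -> rates too low
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    Ld = 0.5 * (lo + hi)
    return Ld, Ld * math.exp(2.0 * sigma)
```

With the curve used later in table 2 (1% and 2% spot rates, 20% volatility), the two calibrated levels straddle the implied one-year forward rate.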

In the next section we show how the information from the BDT tree is employed to describe the dynamics of<br />

the interest rate adopted as risk free rate in the CRR model. We choose such models for two main reasons: (1) both<br />

the models are based on a binomial tree and (2) both the risk factors are lognormal in the limit.<br />



Figure 1. The BDT tree of the 12-months Libor rate L(t, s).<br />

An important property of the implied stock tree is that the rate at which the stock price increases/decreases (u/d)<br />
is constant over time, so that $S_T^j = S_t \cdot j$, with j = u, d. As shown by Cox et al. (1979), the risk neutral probability p is:<br />

$$p = \frac{m - d}{u - d}$$

where m = 1 + L(t, T), and where u·d = 1 because the CRR tree is also recombining. Moreover, we remark that the<br />
discrete time dynamics approximates the Black and Scholes dynamics of the stock price, according to the following<br />
stochastic differential equation (SDE):<br />

$$\Delta S_t = \mu S_t \Delta + \sigma_{CRR}\, S_t\, \varepsilon_t \sqrt{\Delta} \qquad (1)$$

where $S_t$ is the stock price at time t, μ is the drift of the process, Δ is the time distance between two observations,<br />
$\sigma_{CRR}$ is the instantaneous volatility coefficient and $\varepsilon_t$ can be interpreted as the outcome of a binomial random<br />
variable under the natural probability. Notice that as Δ→0, $\varepsilon_t$ tends in distribution to a standard normal random<br />
variable, so that $\varepsilon_t \sqrt{\Delta}$ can be interpreted as a Wiener increment. As a result, the final stock price tends in distribution<br />

to a lognormal random variable. This property entails that the adoption of implied volatility measures for pricing<br />

equity linked contingent claims ensures a satisfactory calibration of the model to the market data. Of course, as the<br />

drift changes, the expected value of the stock price changes too because of the consequent modification of the<br />
probability space. On the contrary, the levels of the future stock prices in each state of the world depend only on the<br />
current stock price and on the volatility parameter $\sigma_{CRR}$. In the next section we adopt the Libor rate as risk free rate<br />
in evaluating equity linked contingent claims. Although the Libor rate refers to AA or AA- rated banks, we consider it<br />
as a proxy of the risk free rate. In section 5 we study how to relax this assumption and a possible solution to this<br />

issue is proposed.<br />
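The CRR quantities used above can be sketched in a few lines (a minimal illustration of our own; the helper names `crr_params` and `risk_neutral_p` are hypothetical, and the parameter values anticipate those of table 2):

```python
# Minimal sketch of the CRR setup described above: u and d are constant over time,
# u*d = 1 so the tree recombines, and the one-period risk-neutral probability is
# p = (m - d)/(u - d) with m = 1 + L(t, T).
import math

def crr_params(sigma, dt):
    """Up/down factors of a recombining CRR tree with step dt (in years)."""
    u = math.exp(sigma * math.sqrt(dt))
    return u, 1.0 / u                        # d = 1/u

def risk_neutral_p(libor, u, d):
    """One-period risk-neutral probability with m = 1 + L(t, T)."""
    m = 1.0 + libor
    return (m - d) / (u - d)

u, d = crr_params(0.20, 1.0)                 # 20% annual volatility, yearly steps
p = risk_neutral_p(0.01, u, d)               # 1% one-year Libor as risk-free proxy
```

No-arbitrage requires d < 1 + L < u, which also keeps p strictly between 0 and 1.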

3 The procedure<br />

In the majority of cases, when there is a two risk factor pricing model we have to ascertain:<br />

1. the dynamics according to which the two factors evolve;<br />

2. the measure of the correlation between the two dynamics;<br />

3. the estimate of relevant parameters under risk neutral environment.<br />

The three tasks can be extremely difficult to perform properly. As a consequence, a certain level of approximation is<br />

often required. Even a simulation based on copula functions, which are not always available for the distributions<br />
under observation, can be very demanding with respect to the choice of an appropriate correlation measure. In<br />

this order of ideas, we developed a numerical procedure to get the arbitrage free price of a European call option in a<br />

stochastic short rate framework. To begin with, we notice that the numerical procedure adopts the results from the<br />

BDT and from the CRR models, whose assumptions hold also for our model.<br />



3.1 Main assumptions<br />

According to the stated environment (section 2), the arbitrage free dynamics of the stock price is described by<br />

equation 1. Therefore the risk neutral dynamics of the stock price is defined as follows:<br />

$$\Delta S_t = r(t)\, S_t \Delta + \sigma_{CRR}\, S_t\, \varepsilon_t \sqrt{\Delta} \qquad (2)$$

where r(t) is the instantaneous risk free interest rate at time t under the corresponding appropriate risk neutral<br />

probability p. The crucial point in our approach is that the instantaneous risk free rate is piecewise constant and<br />

evolves over time according to the BDT dynamics, since the spot rate r(t) is not a random variable unless the tenor<br />

(δ) of the rate matures.<br />

In other words, once we choose a term structure (in our examples the Libor rate term structure), the variability<br />

of the rate can be observed in practice according to the ‘nodes’ of the term structure. Therefore, if the rate is, as in<br />
our case, the Libor, the variability time interval goes from 1 week up to 12 months and coincides with the tenor of the<br />
chosen rate. Therefore, once we adopt the 12 months rate, the time step of the discrete-time dynamics is<br />
equal to 12 months. This accounts for a piecewise constant dynamics. The length of the time interval of the process<br />

describing the rate dynamics is therefore set according to the relevant node of the term structure. Theoretically, any<br />

node could be used, but if we decide to go for a BDT application we also need volatility data. Since not all the nodes<br />

have the same liquidity, the significance of the corresponding implied volatility is not the same across the term<br />

structure. We are therefore forced towards those nodes showing the maximum liquidity, since this guarantees the<br />
most efficient measure of implied volatility. If we assume (or better, observe) that the most liquid node is 12 months,<br />
we will adopt a BDT model on a yearly basis. As a consequence, on a certain time horizon (longer than one year in the<br />
case under estimation), we will have an array of one year rates defining a corresponding set of probability spaces<br />
which can be used for evaluating the stock price dynamics. As a consequence, the evaluating numeraire is piecewise<br />
constant and in a sense “rolls over time” according to the term structure tenor. Therefore, what can be regarded as a<br />

random variable at the evaluation date is the δ-rate. As a consequence, the instantaneous risk free rate r(t) and the δ-<br />

Libor rate L(t, T), with δ = T – t, are connected as follows:<br />

$$1 + \delta\, L(t, T) = \exp\big(r(t)\,\delta\big), \qquad t < T \qquad (3)$$
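Under this relation between the simple δ-tenor Libor rate and the piecewise-constant, continuously compounded short rate, converting in both directions is straightforward (our own sketch; the exact display in the original is garbled, so the relation as coded here is our reconstruction):

```python
# Sketch of the (reconstructed) relation exp(r*delta) = 1 + delta*L between the
# delta-tenor simple Libor rate L and the piecewise-constant short rate r.
import math

def short_rate_from_libor(L, delta=1.0):
    return math.log(1.0 + delta * L) / delta

def libor_from_short_rate(r, delta=1.0):
    return (math.exp(r * delta) - 1.0) / delta
```

For positive rates the continuously compounded rate is always slightly below the simple rate of the same tenor.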


Figure 2. The joint PDF of the short rate L(t, T) and of the stock price St in the case of zero correlation.<br />

Now, let us consider the case in which the short rate increases to L(1,2)u and the stock price also increases to<br />
$S_1^u$. Since the two events are independent, the probability that they occur contemporaneously is:<br />

$$\Pr\!\left(S_1^u \cap L(1,2)_u \,\middle|\, S_0, L(0,1)\right) = \Pr\!\left(S_1^u \,\middle|\, S_0, L(0,1)\right) \cdot \Pr\!\left(L(1,2)_u \,\middle|\, S_0, L(0,1)\right) = p_1 \cdot \tfrac{1}{2}.$$

This procedure may be repeated for each possible couple of the short rate and stock price at time t = 1.<br />

Now, let us consider the state of the world where the stock price is $S_1^u$ and the Libor rate is L(1,2)u. At the<br />
successive time step, t = 2, the stock price may again increase to $S_2^{uu} = S_0 u^2$ with risk neutral marginal probability<br />
$p_2$, or decrease to $S_2^{ud} = S_0 u d = S_0$ with probability $1 - p_2$, with<br />

$$p_2 = \frac{m_2^u - d}{u - d}, \qquad m_2^u = 1 + L(1,2)_u.$$

At the same time, the Libor rate may increase to L(2,3)uu or decrease to L(2,3)ud with equal probability. The probability that<br />
both the stock price and the Libor rate increase for two consecutive times is therefore:<br />

$$\Pr\!\left(S_2^{uu} \cap L(2,3)_{uu}\right) = p_1 \cdot p_2 \cdot \tfrac{1}{2} \cdot \tfrac{1}{2}.$$

We repeat this procedure until, at the end of the second time interval, all the 16 final states of the world and their respective probabilities are available, as described in table 1.<br />

State of the world    Libor rate    Stock price    Probability
1                     L(2,3)uu      S2^uu          h1
2                     L(2,3)ud      S2^uu          h2
3                     L(2,3)du      S2^uu          h3
4                     L(2,3)dd      S2^uu          h4
5                     L(2,3)uu      S2^ud          h5
6                     L(2,3)ud      S2^ud          h6
7                     L(2,3)du      S2^ud          h7
8                     L(2,3)dd      S2^ud          h8
9                     L(2,3)uu      S2^du          h9
10                    L(2,3)ud      S2^du          h10
11                    L(2,3)du      S2^du          h11
12                    L(2,3)dd      S2^du          h12
13                    L(2,3)uu      S2^dd          h13
14                    L(2,3)ud      S2^dd          h14
15                    L(2,3)du      S2^dd          h15
16                    L(2,3)dd      S2^dd          h16
Table 1: An example of the discrete joint probability density function of the stock price and of the Libor rate.<br />
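The 16-state joint distribution of table 1 can be built as the product of the CRR marginal probabilities and the equal-probability BDT branches (a sketch of our own; the function name and the dictionary layout are assumptions, not the paper's code):

```python
# Sketch of the construction of the joint PDF under independence: stock branch
# probabilities are the CRR risk-neutral marginals (the second-step probability
# depends on the rate state through m2 = 1 + L(1,2)), and each BDT branch has
# probability 1/2, so each two-step rate path carries weight 1/4.
import math
from itertools import product

def joint_pdf(S0, sigma, L01, L12):
    """L12 = {'u': ..., 'd': ...}: the two possible 12-months Libor rates at t = 1."""
    u = math.exp(sigma)                      # yearly step, dt = 1
    d = 1.0 / u
    p1 = (1.0 + L01 - d) / (u - d)
    states = {}
    for j1, j2, k1, k2 in product('ud', repeat=4):   # stock moves j1,j2; rate moves k1,k2
        m2 = 1.0 + L12[k1]                           # year-2 rate is state-dependent
        p2 = (m2 - d) / (u - d)
        prob_s1 = p1 if j1 == 'u' else 1.0 - p1
        prob_s2 = p2 if j2 == 'u' else 1.0 - p2
        S2 = S0 * (u if j1 == 'u' else d) * (u if j2 == 'u' else d)
        states[(j1 + j2, k1 + k2)] = (S2, prob_s1 * prob_s2 * 0.25)
    return states
```

Summing the probabilities over all 16 states gives one, as it must for a probability measure.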

The current stock price $S_0$ and the future stock price $S_2$ are thus linked as follows:<br />

$$S_0 = E_0^h\!\left[\frac{S_2}{\big(1 + L(1,2)\big)\big(1 + L(0,1)\big)}\right] \qquad (4)$$

where h is the joint probability measure associated to the future values of the stock price and of the Libor rate and<br />
$E_0^h$ denotes the expected value under the probability measure h conditional on the information available at the<br />
valuation date t = 0.<br />

Equation (4) states that the current stock price $S_0$ is the expected value of the future stock price $S_2$ discounted<br />

at an appropriate stochastic interest rate. We notice that the discount factor is only partly included in the expectation<br />

operator because, at time t=0, the current Libor rate L(0,1) is known while the Libor rate that will run during the<br />

period from 1 to 2 is a random variable. Since the discount factor and the stock price are independent random<br />

variables, it can be shown that:<br />

$$S_0 = E_0^p\!\left[S_2\right] \cdot E_0^q\!\left[\frac{1}{\big(1 + L(1,2)\big)\big(1 + L(0,1)\big)}\right] = \frac{E_0^p\!\left[S_2\right]}{\big(1 + L(0,2)\big)^2} \qquad (5)$$

Equation (5) implies a very important property: the probability measure h, defined as the joint probability<br />

associated to the future values of the stock price and of the Libor rate, is such that the stock price is a martingale.<br />
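The martingale property of equation (4) can be checked numerically (our own sketch; the BDT levels passed in are hypothetical, and discounting to t = 2 only involves L(0,1) and the state-dependent L(1,2)):

```python
# Numerical check of equation (4): discounting each terminal stock price along its
# own rate path and weighting by the joint probabilities recovers S0 exactly,
# because each one-period step is a martingale under its own one-period rate.
import math
from itertools import product

def discounted_expectation(S0, sigma, L01, L12):
    u = math.exp(sigma)
    d = 1.0 / u
    p1 = (1.0 + L01 - d) / (u - d)
    total = 0.0
    for j1, j2, k1 in product('ud', repeat=3):       # k2 does not affect discounting to t=2
        rate1 = L12[k1]
        p2 = (1.0 + rate1 - d) / (u - d)
        prob = (p1 if j1 == 'u' else 1 - p1) * (p2 if j2 == 'u' else 1 - p2) * 0.5
        S2 = S0 * (u if j1 == 'u' else d) * (u if j2 == 'u' else d)
        total += prob * S2 / ((1 + rate1) * (1 + L01))
    return total

value = discounted_expectation(100.0, 0.20, 0.01, {'u': 0.0367, 'd': 0.0246})
```

Since each branch satisfies $p\,u + (1-p)\,d = 1 + L$, the sum telescopes to $S_0$ regardless of the rate levels chosen.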

Once the joint PDF of the final stock price and of the Libor rate is obtained, it is quite a simple task to determine the<br />

price C0 of a European call option at time t=0, with strike price X:<br />

$$C_0 = E_0^h\!\left[\frac{\big(S_2 - X\big)^{+}}{\big(1 + L(1,2)\big)\big(1 + L(0,1)\big)}\right] \qquad (6)$$

4 Numerical examples<br />

Equation 6 implies that the price of a call option is approximately equal to the price of the same option<br />

calculated by means of the CRR model. In other words, the model here developed can be regarded as an extension<br />

of the implied stock tree model on a roll-over basis. However, this might not be the case if we relax the hypothesis<br />
of zero correlation between interest rate and stock price, because in the original CRR model the interest rate is not<br />

stochastic and therefore there is no reason for any relationship between the rate and the stock price. On the contrary,<br />

a change of the term structure of interest rates produces a difference in the price of the call option, since that change<br />

can be regarded as the adoption of a different parameter. Therefore, there is a certain interest in evaluating pricing<br />

differences emerging from the implementation of one model against the other. To this aim it is necessary to get the<br />

joint PDF of the stock price and of the interest rate.<br />

Let us consider the simple framework of the previous section where n=2, δ=1. We set the stock price equal to<br />

100 and its implied annual volatility to 20%. Finally, we assume that the spot Libor rate is 1% for the first year,<br />
2% for the second year and 3% for the third year, while the term structure of the volatility is equal to 20% and 30%<br />

for the second and the third year respectively. All the parameters are reported in table 2.<br />



Parameters      Time (years)
                0       1       2
S0              100     -       -
L(0, T)         1%      2%      3%
σ_CRR           -       20%     20%
σ_BDT           -       20%     30%
Table 2: Parameters adopted for the calibration of the model.<br />

Two remarks are necessary. Firstly, we notice that the Libor rate for maturities over 12 months is not directly<br />

observable on the market. Those values can however be obtained from the swap curve on the Libor rate by means of<br />

a bootstrap technique (details in Hull, 2009). Secondly, we notice that for the purpose of pricing a plain vanilla call option<br />

it is not necessary to specify the 3-year interest rate. We decide however to consider it to show the final joint PDF of<br />

the interest rate and stock price. Such PDF may be adopted to compute the price of financial products whose value<br />

depends contemporaneously on the level of stock price and interest rate at the maturity, as for example convertible<br />

bonds.<br />
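The bootstrap mentioned in the first remark can be sketched as follows (our own illustration, assuming annual-pay par swap rates on the Libor curve; this is not the paper's procedure or data):

```python
# Sketch of bootstrapping zero-coupon discount factors (and annually compounded
# zero rates) from annual-pay par swap rates: the par condition of the n-year swap,
# s * sum(D_1..D_n) + D_n = 1, is solved for D_n one maturity at a time.
def bootstrap_zeros(swap_rates):
    """swap_rates[i] = par rate of the (i+1)-year annual swap. Returns zero rates."""
    discounts, zeros = [], []
    for n, s in enumerate(swap_rates, start=1):
        D_n = (1.0 - s * sum(discounts)) / (1.0 + s)
        discounts.append(D_n)
        zeros.append(D_n ** (-1.0 / n) - 1.0)        # annually compounded zero rate
    return zeros
```

A flat par swap curve bootstraps, as expected, into a flat zero curve at the same level.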

Figure 3a and 3b show respectively the BDT interest rate tree and the binomial stock tree (with marginal<br />

probabilities) calculated adopting the parameters shown in table 2.<br />

Figure 3a. Example of the BDT tree Figure 3b. Example of the binomial stock tree<br />

Given the values in figure 3a and 3b, it is a simple task to determine the joint PDF of the interest rate and of the<br />

stock tree. For example, the probability that the stock price will be 100 at the end of the second year jointly with an<br />

interest rate level of 9.3% is 0.25*0.4998.<br />

We remark that, if we set the hypothesis of zero correlation, the price of the call option calculated by means of<br />
the procedure exposed in section 3 coincides with the price of the same option calculated by means of the CRR<br />
model in the case where the term structure of the interest rates is not flat. More precisely, the risk free rate for the first<br />

year coincides with the corresponding spot rate while the risk free rate for the second year coincides with the<br />

corresponding forward rate. In this way we are able to incorporate market expectations (at the valuation date) on the<br />

future interest rates in the pricing of the option.<br />
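This roll-over use of spot and forward rates can be sketched on a two-step tree (our own illustration with the parameters of table 2; the function name and the closed-form forward rate are our assumptions, not the paper's code):

```python
# Two-step pricing sketch of the roll-over idea: year 1 uses the spot rate L(0,1),
# year 2 uses the one-year forward rate implied by L(0,2), so market expectations
# on future rates enter the option price.
import math

def call_price_rollover(S0, X, sigma, L01, L02):
    u = math.exp(sigma)
    d = 1.0 / u
    fwd = (1.0 + L02) ** 2 / (1.0 + L01) - 1.0       # 1y forward rate, year 1 -> 2
    p1 = (1.0 + L01 - d) / (u - d)
    p2 = (1.0 + fwd - d) / (u - d)
    price = 0.0
    for j1, q1 in (('u', p1), ('d', 1.0 - p1)):
        for j2, q2 in (('u', p2), ('d', 1.0 - p2)):
            S2 = S0 * (u if j1 == 'u' else d) * (u if j2 == 'u' else d)
            price += q1 * q2 * max(S2 - X, 0.0)
    return price / ((1.0 + L01) * (1.0 + fwd))

c = call_price_rollover(100.0, 100.0, 0.20, 0.01, 0.02)
```

Note that the two-period discount factor (1 + L(0,1))(1 + fwd) equals (1 + L(0,2))² by construction.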

To show how interest rate expectations affect the stock option pricing, we compare the price of a call option,<br />

derived in our framework, to the Black and Scholes price. We decide to adopt the Black and Scholes model for the<br />

comparison firstly, because the CRR price tends to the Black and Scholes price as Δ→0, and secondly, because the<br />

Black and Scholes price is not able to capture the expectations on the future interest rates.<br />



Table 3 reports the differences (in percentage) between the price of a two year ATM vanilla call option<br />

calculated by means of our procedure (P) and the price of the same option calculated by means of the Black and<br />

Scholes (1973) model (Bls), according to the following formula:<br />

$$\frac{P - Bls}{Bls} \times 100\,.$$

Such differences are calculated for different term structures of the interest rates. The other parameters adopted<br />

to evaluate the price differences are those reported in table 2 but this time we consider, for each year, a higher<br />

number of steps for the stock tree. More precisely, since the stock exchange is open about 254 days per year, we<br />

thus set n=508 (Δ=1/254).<br />

We thus notice that if the term structure is flat (see the numbers on the diagonal of table 3), the percentage<br />
difference with respect to the Black and Scholes formula is quite negligible, from 0.03% to 0.05%. However, as expected, such<br />
differences tend to increase as the difference between the interest rates for the two maturities increases. If, on the<br />
contrary, L(0,1) &lt; L(0,2) (L(0,1) &gt; L(0,2)), the term structure is upward (downward) sloping and the price differences<br />
are positive (negative).<br />

L(0,1) \ L(0,2)   1%        2%        3%        4%        5%
1%                0.05%     6.96%     13.10%    18.58%    23.48%
2%                -7.41%    0.04%     6.68%     12.60%    17.90%
3%                -15.12%   -7.10%    0.04%     6.41%     12.11%
4%                -23.06%   -14.45%   -6.80%    0.04%     6.15%
5%                -31.22%   -22.01%   -13.82%   -6.51%    0.03%
Table 3. Price differences with respect to the Black and Scholes (1973) formula for different interest rate term structures.<br />

5 Final remarks<br />

This paper shows a procedure to determine the price of financial contracts that are exposed to two sources of<br />

risk: the stock price and the interest rate. In particular, we assume that each risk factor evolves over time according<br />

to a binomial tree so that the final distribution is, in the limit for both risk factors, lognormal. To this aim, we set<br />

some hypotheses and in particular we assume that the correlation between the interest rate and the stock price is zero<br />

and that the Libor rate proxies for the risk free rate. We showed that under these assumptions, the stock price is a<br />
martingale under a particular (joint) probability measure that results from the product of the risk neutral marginal<br />
probabilities of the two considered risk factors. Even if these assumptions appear clearly unrealistic, we<br />
set them to simplify the pricing approach. In this section, some techniques are proposed in order to relax<br />
such hypotheses.<br />

In particular, the assumption of zero correlation between the interest rate and the stock price may be released by<br />

redistributing, in a different way, the joint probabilities calculated in the case of independence, which are exposed in<br />
figure 2, among the possible states of the world. For example, we can set the hypothesis that the stock price and the<br />
interest rate show perfectly negative correlation by equally distributing the probabilities (in the case of independence) of<br />
contemporaneous up movements and down movements of the stock price and the rate to the other two states of the world,<br />
according to figure 4. The terminal stock price will thus be a binomial random variable and, in the<br />
limit, lognormal.<br />



Figure 4. The joint PDF of the short rate L(t, T) and of the stock price St in the case of perfectly negative correlation.<br />

We notice that, if this is the case, the instantaneous correlation will be -1 but the terminal correlation may<br />

depend not only on the way the probabilities are redistributed, but also on the variances of the two risk factors. For<br />

these reasons, more sophisticated techniques must be applied in order to calibrate the model to the empirical<br />

correlation between interest rate and stock price. Finally we point out that, if a similar technique can be applied to<br />

impose perfectly positive correlation between the two risk factors, it is a harder task to fit the correlation parameter<br />

to the observed market data. In this case the problem is twofold. Firstly, the correlation between interest rate and<br />

stock price must be estimated. Secondly, we have to choose a redistribution rule (different from the above<br />

mentioned rule) such that the correlation of the model is equal to the estimated correlation. This second point can be<br />

solved in a very simple way. In the case of perfectly negative correlation, we set the probability of contemporaneous<br />

up and down movement of the two risk factor equal to zero. If on the contrary we decide to equally redistribute to<br />

the other two states of the world only a percentage of the probability of contemporaneous up and down movement,<br />

it will result in a correlation equal to -γ. If we thus decide to equally redistribute only a 50% of the probability of<br />

contemporaneously up and down movement to the other states of the world, it will result in a correlation of -0.5. On<br />

the contrary, if we decide to equally distribute a percentage γ of the probability of opposite movements of the<br />

considered risk factors to the other two states of the world, it will result in a correlation equal to γ.<br />
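The redistribution rule just described can be sketched as follows (our own implementation under the paper's stated rule; with symmetric marginals, i.e. a stock up-probability of 1/2, the induced correlation is exactly -γ, while for asymmetric marginals it is only approximate, consistently with the calibration caveats above):

```python
# Sketch of the redistribution rule: starting from the independent 2x2 joint
# probabilities of one-step stock/rate moves, move a fraction gamma of the
# "same direction" cells to the two "opposite direction" cells, splitting equally.
def redistribute(p_stock_up, gamma):
    """Return joint probabilities {(stock, rate): prob}; rate up/down equally likely."""
    p, q = p_stock_up, 1.0 - p_stock_up
    joint = {('u', 'u'): 0.5 * p, ('d', 'd'): 0.5 * q,
             ('u', 'd'): 0.5 * p, ('d', 'u'): 0.5 * q}
    moved = gamma * joint[('u', 'u')] + gamma * joint[('d', 'd')]
    joint[('u', 'u')] *= (1.0 - gamma)
    joint[('d', 'd')] *= (1.0 - gamma)
    joint[('u', 'd')] += 0.5 * moved                 # split equally, per the stated rule
    joint[('d', 'u')] += 0.5 * moved
    return joint
```

Total probability is conserved by construction, since the mass removed from the concordant cells is exactly the mass added to the discordant ones.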

We notice however that by relaxing the hypothesis of zero correlation, equation 5 does not hold and the stock<br />
price dynamics does not possess the martingale property. However, even if the current stock price were still an<br />
unbiased estimate of the future stock price, it can be noticed that relaxing the hypothesis of zero correlation may<br />
produce a price for a call option different from that obtained by adopting the Cox, Ross and Rubinstein<br />
(1979) model.<br />

The second strong hypothesis of our model is the adoption of the Libor rate as risk free rate. Since the Libor<br />
rate is in general higher than a AAA interest rate, its adoption will result in a higher (lower) price for a call (put)<br />
option. To solve this problem, we can however consider the Libor rate as the sum of a basic risk free rate<br />
R(t, T) and a spread φ(t, T). In this case the risk free rate R(t, T) will be a random variable composed as follows:<br />

$$R(t, T) = L(t, T) - \varphi(t, T).$$

Assuming that the spread is not a random variable and that it is constant over time at a certain level, it will only<br />
be a function of the tenor δ of the interest rate, and the risk free rate R(t, T) will be a random variable with the same<br />
distribution as L(t, T), the same variance but a lower average.<br />

6 Acknowledgments<br />

The authors thank Dr. D. Curcio for valuable research assistance. Although the paper is the result of a joint<br />

effort of the authors, sections 1 and 3.1 are due to R. Cocozza, whilst sections 2, 3.2, 4 and 5 are due to A. De<br />

Simone.<br />



7 References<br />

Amin, K. I., Jarrow, R. A. (1992). Pricing options on risky assets in a stochastic interest rate economy.<br />

Mathematical Finance, 2(4), 217-237.<br />

Bacinello, A. R., Ortu, F. (1996). Fixed income linked life insurance policies with minimum guarantees: Pricing<br />

models and numerical results. European Journal of Operational Research, 91(2), 235-249.<br />

Black, F. (1976). The pricing of commodity contracts. Journal of Financial Economics, 3(1-2), 167-179.<br />

Black, F., Derman, E., Toy, W. (1990). A one-factor model of interest rates and its application to Treasury bond<br />
options. Financial Analysts Journal, 46(1), 33-39.<br />

Black, F., Scholes, M. (1973). The pricing of options and corporate liabilities. The Journal of Political Economy,<br />

81(3), 637-654.<br />

Brace, A., Gątarek, D., Musiela, M. (1997). The market model of interest rate dynamics. Mathematical Finance,<br />

7(2), 127-155.<br />

Brennan, M. J., Schwartz, E. S. (1976). The pricing of equity-linked life insurance policies with an asset value<br />

guarantee. Journal of Financial Economics, 3(3), 159-213.<br />

Brennan, M. J., Schwartz, E. S. (1980). Analyzing convertible bonds. Journal of Financial and Quantitative<br />

Analysis, 15(4), 907-929.<br />

Cox, J. C., Ross, S. A., Rubinstein, M. (1979). Option pricing: A simplified approach. Journal of Financial<br />

Economics, 7(3), 229-263.<br />

Cox, J. C., Ingersoll, J. E., Ross, S. A. (1985). A theory of the term structure of interest rates. Econometrica, 53(2),<br />

385-407.<br />

Cocozza, R., Orlando, A. (2009). Managing structured bonds: An analysis using RAROC and EVA. Journal of Risk<br />

Management in Financial Institutions, 2(4), 409-426.<br />

Cocozza, R., De Simone, A., Di Lorenzo, E., Sibillo, M. (2011). Participating policies: Risk and value drivers in a<br />
financial management perspective. Forthcoming in the 14th Conference of the ASMDA International Society,<br />

7-10 June 2011.<br />

De Simone, A. (2010). Pricing interest rate derivatives under different interest rate modeling: A critical and<br />

empirical analysis. Investment Management and Financial Innovations, 7(2), 40-49.<br />

Feller, W. (1951). Two singular diffusion problems. Annals of Mathematics, 54(1), 173-182.<br />

Ho, T. S. Y., Lee, S.-B. (1986). Term structure movements and pricing interest rate contingent claims. The Journal<br />

of Finance, XLI(5), 1011-1029.<br />

Hull, J. C. (2009). Options, futures and other derivatives. Prentice Hall.<br />

Kim, Y.-J., Kunitomo, N. (1999). Pricing options under stochastic interest rates: A new approach. Asia-Pacific<br />

Financial Markets, 6(1), 49-70.<br />

Kunitomo, N., Kim, Y.-J. (2001). Effects of stochastic interest rates and volatility on contingent claims. CIRJE-F-<br />

129 (Extended Version of CIRJE-F-67, 2000), University of Tokyo.<br />

Merton, R. C. (1973). Theory of rational option pricing. The Bell Journal of Economics and Management Science,<br />

4(1), 141-183.<br />

Neftci, S. (2008). Principles of financial engineering. Academic Press, Advanced Finance Series.<br />

Rabinovitch, R. (1989). Pricing stock and bond options when the default-free rate is stochastic. Journal of Financial<br />

and Quantitative Analysis, 24(4), 447-457.<br />

Vasicek, O. (1977). An equilibrium characterization of the term structure. Journal of Financial Economics, 5(2),<br />

177-188.<br />

119


LIQUIDITY AND EXPECTED RETURNS: NEW EVIDENCE FROM DAILY DATA 1926-2008<br />

M. Reza Baradaran & Maurice Peat, The University of Sydney, Australia<br />

Email: rezab@econ.usyd.edu.au<br />

Abstract: This study analyzes the effect of liquidity on expected stock returns on the NYSE by using a new low-frequency proxy for transaction costs over 1926-2008, the longest available sample. Past research has concentrated on the post-1963 period because of the lack of data needed to construct liquidity measures. The results from the entire sample of 1926-2008 show that expected returns increase with stock-level illiquidity. However, the liquidity level has explanatory power for the cross-sectional variation of expected stock returns only over the post-1963 period, and it is, both economically and statistically, insignificant for the whole sample and the pre-1963 period. These findings are robust to various characteristics such as size and to risk controls including the CAPM beta, the Fama-French factors and the systematic liquidity risk factor. The results for the liquidity effect also appear to be distinct from the size effect. On the other hand, evidence from the entire sample and pre-1963 suggests that systematic liquidity risk is significantly associated with the cross-section of expected stock returns. These results favour a sample-specific explanation for the illiquidity level premium documented in many studies for the post-1963 period. Nevertheless, analysis over the whole period 1926-2008 shows that systematic liquidity risk plays a significant role in the cross-sectional variation of expected stock returns.<br />

Keywords: Liquidity, Asset pricing, Transaction costs, Effective spread<br />

JEL classification: G120<br />

1 Introduction<br />

Liquidity is important in many financial markets for both investors and policy makers and a large and growing body<br />

of work has considered identifying liquidity costs and their impacts on asset pricing (e.g. Amihud and Mendelson,<br />

1986; Gottesman and Jacoby, 2005; Korajczyk and Sadka, 2008; and Hasbrouck, 2009). A major problem in investigating the role of liquidity in asset prices is that, while the use of long time series in asset pricing studies is recommended 1 , the intra-day data that enable the estimation of transaction costs from the actual sequences of trades and quotes are not available prior to 1983 (in the US markets). Even the data required for the common low-frequency measures (such as quoted spreads) are not available prior to 1963. Therefore, the current literature on liquidity and asset pricing concentrates mostly on the post-1963 period (in the US markets).<br />

This paper uses a new low-frequency proxy and examines the association of expected stock returns and liquidity from 1926 onwards. The proxy is Effective Tick4, henceforth EFFT, developed by Holden (2009), and is intended as a proxy for the intra-day effective spread. Liquidity has many facets, and it is important to note that EFFT, as a proxy for the intra-day effective spread, captures only the execution cost dimension of liquidity and does not include the total price impact of a trade. With this caveat, we refer to EFFT as a proxy for liquidity and use transaction costs (or trading costs) and liquidity interchangeably in this study.<br />

This paper examines the role of liquidity in explaining the cross-sectional variation of expected stock returns in the pre-1963 period, for which there is a lack of research, in the post-1963 period, and over the entire period 1926-2008. Using all common shares on the NYSE over 1926-2008 and employing the Fama-MacBeth approach on portfolios, we find that expected returns increase with stock-level illiquidity. However, this positive relation is not statistically significant at the conventional levels. Nonetheless, we find that a one percent (one standard deviation) level of liquidity translates into a monthly expected premium of about 0.12 percent, or 12 basis points. This result is robust to various characteristics such as size and to risk controls including the three Fama-French factors and the systematic liquidity factor. Results from the subsamples show that the liquidity level has explanatory power for the cross-sectional variation of expected stock returns only over the post-1963 period. This effect is both economically and statistically insignificant for the pre-1963 period. On the other hand, evidence from the entire sample and pre-1963 suggests that market liquidity risk is marginally significant in association with the cross-section of expected stock returns. These findings suggest that liquidity statistically affects the cross-sectional variation of expected returns over the entire<br />

sample as well as the subsamples, though the channel for this effect seems to differ across the periods. For post-1963 the premium with respect to the liquidity level is more prevalent than that of the systematic liquidity risk. The opposite is true over the pre-1963 period and the entire sample. These results provide evidence for a sample-specific explanation of the illiquidity level premium documented in many studies for the post-1963 period. Nevertheless, analysis over the whole period 1926-2008 shows that systematic liquidity risk, like the liquidity effect, plays a significant role in stock expected returns.<br />

1 In asset pricing studies, realised returns are usually used as the proxy for expected returns. Since the variance of realised returns around the expected returns is high, long time-series data provide a large amount of data that increases the power of asset pricing tests (Amihud et al., 2005). Accordingly, many asset pricing tests make use of US equity returns from 1926 onward (Hasbrouck, 2009).<br />

The paper proceeds as follows. Section 2 reviews the EFFT and its construction. Section 3 presents data and<br />

methodology that includes data description, variable and portfolio constructions, and asset pricing tests. Results are<br />

provided and discussed in section 4. Section 5 offers the concluding remarks.<br />

2 EFFT: a low-frequency proxy for the effective spreads<br />

EFFT, developed by Holden (2009), is a daily-data proxy for the effective spread and picks up on two attributes of the daily data: price clustering on trading days and reported quoted spreads on no-trade days. The proxy has two components corresponding to each of these attributes. The first component, effective tick, based on the observable price clustering, is a proxy for the effective spread. The second component is the average quoted spread from any no-trade days that exist, and it enriches the effective tick by incorporating the information related to no-trade days. We first review the effective tick and then derive the EFFT estimator. Effective tick is based on the idea that the effective spread on a particular day equals the increment of the price cluster on that day. For example, on a $1/8 fractional price grid, if the spread is $1/4, the model assumes that prices end on even eighths, i.e., quarters. Thus, if odd-eighth transaction prices are observed, one must infer that the spread is $1/8. This implies that the simple frequency with which closing prices occur in particular price clusters (in a time interval) can be used to estimate the corresponding spread probabilities and, hence, to infer the effective spread for that interval. For example, on a $1/8 fractional price grid, the frequency with which trades occur in four mutually exclusive price cluster sets (odd $1/8s, odd $1/4s, odd $1/2s, and whole dollars) can be used to estimate the probability of a $1/8 spread, $1/4 spread, $1/2 spread, and $1 spread, respectively. There are similar clusters of special prices on a decimal price grid (off pennies, off nickels, off dimes, off quarters, and whole dollars) that can be used to estimate the probability of a penny spread, nickel spread, dime spread, quarter spread and whole dollar spread, respectively. In order to construct the effective tick proxy for a time interval, the first step is to compute the frequency of each price cluster within that interval. Take $S_t$ as the realisation of the effective spread at the closing trade of day t and assume that $S_t$ is randomly drawn from a set of possible spreads $s_j$ (for example, on a $1/8 fractional price grid, $s_1$ = $1/8 spread, $s_2$ = $1/4 spread, $s_3$ = $1/2 spread and $s_4$ = $1 spread) with corresponding probabilities $\gamma_j$, where $j = 1, 2, \dots, J$ and $s_1 < s_2 < \dots < s_J$.<br />

Then, the effective tick proxy is calculated as the probability-weighted average of each effective spread size divided by the average price $\bar{p}_i$ in time interval i:<br />

$\mathrm{EffectiveTick}_i = \dfrac{\sum_{j=1}^{J} \hat{\gamma}_j \, s_j}{\bar{p}_i}$   (4)<br />
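To make the clustering idea concrete, the following Python sketch estimates the effective tick from daily closing prices on a $1/8 fractional grid. It is a simplified illustration, not the full Holden (2009) procedure: the mapping from cluster frequencies to spread probabilities (the `u` values) and the clipping of negative probabilities are our own simplified rendering of the logic described above, and all function names are ours.<br />

```python
from fractions import Fraction

# Spread sizes s_j matching the four clusters in the text:
# odd 1/8s -> $1/8, odd 1/4s -> $1/4, odd 1/2s -> $1/2, whole $ -> $1.
SPREADS = [Fraction(1, 8), Fraction(1, 4), Fraction(1, 2), Fraction(1, 1)]

def cluster(price):
    """Return the index j of the finest cluster the closing price falls in."""
    frac = Fraction(price).limit_denominator(8) % 1
    if frac.denominator == 8:          # odd eighth
        return 0
    if frac.denominator == 4:          # odd quarter
        return 1
    if frac.denominator == 2:          # odd half
        return 2
    return 3                           # whole dollar

def effective_tick(closing_prices):
    """Equation (4): probability-weighted average spread over average price."""
    n = [0, 0, 0, 0]
    for p in closing_prices:
        n[cluster(p)] += 1
    f = [c / sum(n) for c in n]                     # cluster frequencies
    # Unconstrained probabilities from frequencies (assumed simplification),
    # then clipped so the gammas stay in [0, 1] and sum to at most 1.
    u = [2 * f[0], 2 * f[1] - f[0], 2 * f[2] - f[1], f[3] - f[2]]
    gammas, remaining = [], 1.0
    for uj in u:
        g = min(max(uj, 0.0), remaining)
        gammas.append(g)
        remaining -= g
    p_bar = sum(float(p) for p in closing_prices) / len(closing_prices)
    return sum(g * float(s) for g, s in zip(gammas, SPREADS)) / p_bar
```

For instance, a month in which every closing price ends on a whole dollar would imply a $1 spread, while a month dominated by odd-eighth prices would imply a $1/8 spread.<br />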

Holden (2009) incorporates the average of the quoted spreads on no-trade days into the effective tick estimator to obtain the EFFT. The EFFT for time interval i is the probability-weighted average of the effective tick estimator and the average of the quoted spreads from no-trade days:<br />

$\mathrm{EFFT}_i = \hat{\gamma}\,\mathrm{EffectiveTick}_i + \begin{cases} (1-\hat{\gamma})\,\dfrac{1}{NTD \cdot \bar{p}_i}\displaystyle\sum_{t=1}^{NTD} NQS_t & \text{when } NTD \neq 0, \\ 0 & \text{when } NTD = 0 \end{cases}$   (5)<br />

where $NQS_t$ is the quoted spread computed using the reported bid and ask prices on no-trade day t, and $\hat{\gamma}$ is the estimated probability of a trading day, given by<br />

$\hat{\gamma} = \dfrac{TD}{TD + NTD}$   (6)<br />

where TD and NTD are the number of trading days and no-trade days over the time interval, respectively. Holden (2009) and Goyenko et al. (2009) report high correlations between this proxy and both high-frequency and low-frequency benchmarks. They also show that this measure performs better against the high-frequency benchmarks than other available low-frequency proxies for liquidity and/or transaction costs.<br />
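Equations (5) and (6) can be sketched in a few lines, under our reading that the average no-trade quoted spread is scaled by the average price (function and variable names are ours):<br />

```python
def efft(effective_tick_i, no_trade_quoted_spreads, trading_days, avg_price):
    """Equations (5)-(6): blend the effective tick with quoted spreads on
    no-trade days, weighted by the estimated trading-day probability."""
    ntd = len(no_trade_quoted_spreads)
    gamma_hat = trading_days / (trading_days + ntd)      # equation (6)
    if ntd == 0:
        return gamma_hat * effective_tick_i              # second term is zero
    avg_nqs = sum(no_trade_quoted_spreads) / ntd
    return gamma_hat * effective_tick_i + (1 - gamma_hat) * avg_nqs / avg_price
```

With no no-trade days, gamma_hat equals one and the EFFT collapses to the effective tick itself.<br />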

3 Data and Methodology<br />

3.1 Data<br />

Daily transaction data from the CRSP daily file from December 31, 1925 until December 31, 2008 for all the common stocks (CRSP share codes 10 and 11) listed on the NYSE are employed to estimate the monthly EFFT. Monthly returns and the other data required to compute stock characteristics are downloaded from the CRSP monthly file. Data for the Fama and French three factors (market, size and value) are downloaded from French's website (French, 2010). To be included in the monthly cross-sectional analysis, a stock must be traded at the beginning and end of the year. It also needs to have monthly data on return and market capitalisation at the start and end of the year. In addition, as suggested by Fama and French (1992), we exclude financial firms because they usually have high leverage. This screening process yields on average 3035 stocks. The NYSE introduced the decimal pricing regime by applying the new regime to some pilot firms from 28 August 2000 and then switched completely to decimal grids on 29 January 2001. We eliminate from our sample the pilot firms, which started to be quoted and traded under the decimal pricing system at various times after 28 August 2000. Since the estimation of EFFT is based on tick sizes, this elimination makes the computation of EFFTs consistent across stocks. This filtering removes 88 pilot firms.<br />

In order to prepare the data for the cross-sectional analysis, EFFT and size are computed for each stock-month. EFFT is calculated at the end of the month using daily trade data and employing equations 1 to 6. From January 1926 to January 2001, during which the NYSE used a fractional price grid, price increments as small as $1/64 are used. From February 2001 to December 2008, during which the NYSE has had a decimal pricing system, the tick sizes have been $0.01, $0.05, $0.10, $0.25 and $1. Size is the market capitalisation of the equity at the end of the month. Table 1 reports summary statistics of the time-series averages of EFFT, size and returns across all stocks (in each month) for the whole period 1926-2008.<br />

Variable              Mean     Std Dev   Median<br />
Return                0.0117   0.072     0.014<br />
EFFT                  0.0124   0.007     0.011<br />
Size (in $ Billion)   1.5108   2.3508    0.4380<br />

Table 1: Summary statistics for the individual stocks<br />

The average monthly effective spread (EFFT) is approximately 1.24%. The size variable displays considerable skewness: its distribution is skewed to the right. Therefore, in our empirical analysis we use a relative measure for size (the log market capitalisation relative to the median), as suggested by Hasbrouck (2009), to ensure the stationarity of the size time series. This relative measure is constructed at the end of the month as follows. If m_jt denotes the natural logarithm of the equity market capitalisation of stock j at the end of month t, the log relative market capitalisation is the difference between m_jt and the cross-sectional median of the natural logarithm of the equity market capitalisation of all stocks in that month. The log size relative to the median captures the cross-sectional variation while removing the non-stationary long-run components.<br />
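The relative size measure can be computed directly; a minimal sketch (names are ours):<br />

```python
import math
from statistics import median

def log_relative_size(market_caps):
    """Log market capitalisation relative to the cross-sectional median,
    computed at month end (as in Hasbrouck, 2009)."""
    logs = [math.log(mc) for mc in market_caps]
    med = median(logs)
    return [m - med for m in logs]
```

A stock at the cross-sectional median gets a value of zero, so the measure is automatically centred each month and free of the long-run growth in market capitalisations.<br />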

3.2 Methodology<br />

We employ the procedure proposed by Fama and MacBeth (1973) and use portfolios to test whether EFFT, as a security characteristic, has incremental explanatory power for returns relative to common risk factors and after controlling for other well-known stock characteristics. Portfolios are formed annually based on the information available at the start of the year: EFFT and beta estimated over the prior period. Using the data over the prior 3-4 years, pre-ranking (rolling) betas of individual stocks are estimated from market model regressions of monthly excess returns (over the one-month Treasury bill return). The market index is the CRSP equally weighted AMEX/NYSE/Nasdaq index. At the end of each year, all stocks in the sample are ranked into five equal groups by the pre-ranking beta estimates. Each of these five beta groups is then divided into five equal subgroups by ranking stocks on their average monthly EFFT computed over the prior year. This yields twenty-five portfolios, as asset representatives, with almost equal numbers of stocks, which are rebalanced every year. We then compute the portfolio variables (returns, EFFT, size) and unconditional (post-ranking) betas as the equally weighted averages of the stock variables. We apply the equally weighted average rather than the value-weighted because, as Hasbrouck (2009) points out, value-weighted averages are likely to suppress variation in liquidity due to the inverse association of liquidity and size.<br />
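The annual double sort can be sketched as follows. This is a simplified version with exact quintile labels (in the paper the groups are only approximately equal in size); function names are ours.<br />

```python
def quintile_labels(values):
    """Rank values into five near-equal groups (0 = lowest quintile)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(order):
        labels[i] = rank * 5 // len(values)
    return labels

def form_portfolios(betas, effts):
    """5x5 sequential sort: beta quintiles first, then EFFT quintiles
    within each beta group; returns {(beta_q, efft_q): [stock indices]}."""
    beta_q = quintile_labels(betas)
    portfolios = {}
    for b in range(5):
        members = [i for i in range(len(betas)) if beta_q[i] == b]
        efft_q = quintile_labels([effts[i] for i in members])
        for idx, i in enumerate(members):
            portfolios.setdefault((b, efft_q[idx]), []).append(i)
    return portfolios
```

Portfolio returns, EFFTs and sizes would then be the equally weighted averages of the member stocks' variables, re-computed after each annual rebalancing.<br />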

[EFFT Group (Lowest to Highest) by Beta Group (Lowest to Highest), with row and column means. Each cell reports the portfolio mean EFFT (in percent), [beta], mean return, (size, in $ billion) and the number of firms.]<br />

Table 2: Summary Statistics for the 25 annually re-balanced portfolios of NYSE firms<br />

Table 2 reports the summary statistics of the portfolio values over 1930-2008 (948 test months). Portfolio EFFTs vary between 0.31 percent and 4.55 percent, betas range between 0.48 and 1.42, and returns range from 0.76 percent to 1.72 percent. The values for beta and EFFT in the higher ranked portfolios imply that less liquid stocks<br />


tend to be high beta stocks. Portfolio market capitalisation varies between $93.4 million and $9.33 billion. Within<br />

each beta group, return generally increases when EFFT increases and market capitalisation decreases. This suggests<br />

that illiquid stocks tend to have smaller market value and higher returns. Within each EFFT group, market<br />

capitalisation decreases when portfolio betas increase, while return increases. This implies that smaller stocks are<br />

riskier and associated with higher returns.<br />

3.2.1 Asset pricing tests<br />

In order to explore the relation between expected stock returns and EFFT, and based on the Fama-MacBeth approach, a cross-sectional model is estimated across all portfolios for each month t, in which monthly portfolio excess returns are a function of asset characteristics and risk factor loadings:<br />

$r_{it} = \delta_{0t} + \sum_{k=1}^{K} \lambda_{kt}\,\beta_{ki} + \sum_{j=1}^{J} \delta_{jt}\,Z_{ji,t} + \varepsilon_{it}$   (7)<br />

where $r_{it}$ is the excess return on portfolio i in month t of the test year and $Z_{ji,t}$ is characteristic j of portfolio i, estimated at the end of the preceding year from data in that year and known to investors at the beginning of the test year (during which they make their investment decisions). $\beta_{ki}$ are the unconditional risk loadings for portfolio i computed over the sample. The slopes $\delta_{jt}$ and $\lambda_{kt}$ are, respectively, the effects of the characteristics on the expected excess returns and the risk factor premia in cross-section month t, and $\varepsilon_{it}$ are the residuals.<br />

The overall coefficient estimate is the standard Fama-MacBeth estimator, which is the arithmetic average of the time series of the OLS estimates of the coefficients in equation 7. The t-statistics are adjusted for heteroskedasticity and autocorrelation (HAC) using the Newey-West (1987) method with three lags. The following characteristics and risk variables are used in model 7.<br />
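The two passes, monthly cross-sectional regressions followed by averaging the slope estimates with Newey-West standard errors, can be sketched in Python. This is a hypothetical single-characteristic version of equation 7 with a Bartlett-kernel HAC variance; function names are ours.<br />

```python
def ols_slope(y, x):
    """Univariate OLS: return (slope, intercept) of y on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return beta, my - beta * mx

def newey_west_t(series, lags=3):
    """t-statistic of the mean of a series, with Newey-West (1987) HAC
    variance using a Bartlett kernel and the given number of lags."""
    T = len(series)
    mean = sum(series) / T
    e = [s - mean for s in series]
    lrv = sum(v * v for v in e) / T                      # lag-0 variance
    for L in range(1, lags + 1):
        w = 1 - L / (lags + 1)                           # Bartlett weight
        cov = sum(e[t] * e[t - L] for t in range(L, T)) / T
        lrv += 2 * w * cov
    return mean / (lrv / T) ** 0.5

def fama_macbeth(excess_returns_by_month, characteristic_by_month):
    """Cross-sectional regression each month, then average the monthly
    slopes (the premium) and attach a Newey-West t-statistic."""
    slopes = [ols_slope(r, z)[0]
              for r, z in zip(excess_returns_by_month, characteristic_by_month)]
    premium = sum(slopes) / len(slopes)
    return premium, newey_west_t(slopes)
```

In the paper the monthly cross-sections include several characteristics and factor loadings at once; the multivariate version replaces `ols_slope` with a multiple regression but leaves the time-series averaging unchanged.<br />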

3.2.2 Characteristics variables<br />

3.2.2.1 Transaction cost proxy<br />

EFFT: the transaction costs proxy for portfolio i in month t of the test year y is the average of the portfolio's monthly EFFTs in the preceding year.<br />

3.2.2.2 Control variables<br />

Relative Size: to account for size, we use a relative measure for size (the log market capitalisation relative to the cross-sectional median) to ensure the stationarity of the size time series. The log relative market capitalisation for portfolio i in month t of test year y is the log relative market capitalisation of the portfolio at the end of the preceding year.<br />

Momentum: in our empirical analysis, the momentum variable for portfolio i in month t is the portfolio's cumulative monthly return over the first half of the preceding year. Novy-Marx (2008) documents that a stock's intermediate-horizon past performance, the first six months of the prior year, drives momentum.<br />

3.2.3 Systematic risk variables<br />

The risk variables are the projection coefficients (factor loadings) in the K-factor return generating process<br />

$r_{it} = \alpha_{0} + \sum_{k=1}^{K} \beta_{ki}\,f_{kt} + e_{it}$   (8)<br />

where $r_{it}$ is the excess return on portfolio i in month t of the test year and $f_{kt}$ is the factor realisation in month t. $\beta_{ki}$ are the factor loadings for the respective risk factors and $e_{it}$ are the residuals. The (unconditional) betas of each portfolio are obtained via OLS time-series regression of equation 8 over the sample. In our asset pricing analysis we use various sets of factors. The basic set includes only the excess market return as the systematic factor, so equation 8 becomes a classical CAPM model. The market return is the CRSP equally weighted AMEX/NYSE/Nasdaq index and the risk-free rate is the one-month Treasury bill rate. The second set is the three Fama-French factors, i.e. Mkt, the market portfolio excess return; SMB, the mimicked performance of a portfolio that is long in small firms and short in large firms; and HML, the mimicked performance of a portfolio that is long in high book-to-market equity firms and short in low book-to-market equity firms. The third set consists of the excess market return and the liquidity factor, making model 8 a liquidity-augmented CAPM. The fourth set contains the three Fama-French factors and the liquidity factor, making model 8 a liquidity-augmented Fama-French model. The liquidity<br />



factor, LIQ, is the mimicked performance of a portfolio that is long in low-liquidity firms and short in high-liquidity firms after controlling for their market beta. The construction of our liquidity factor, LIQ, is similar to the construction of SMB in Fama and French (1993). At the start of each year we rank all NYSE common stocks on their betas computed over the previous 3 to 4 years. We form three portfolios based on the 30th and 70th NYSE percentiles: low beta, neutral beta and high beta. Then within each beta portfolio we sort the stocks on their previous year's average monthly EFFT and construct two portfolios: low-liquidity and high-liquidity. The breakpoint is the median value of liquidity within each beta portfolio. As a result, we have six portfolios at the start of each year. The liquidity factor, LIQ, is the monthly average return on the three (equally weighted) low-liquidity portfolios minus the monthly average return on the three (equally weighted) high-liquidity portfolios.<br />
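The LIQ construction can be sketched for a single month as follows. This is a simplified illustration: the nearest-rank percentile breakpoints and all names are ours, and real NYSE breakpoints would be computed on the full cross-section.<br />

```python
from statistics import median

def percentile_value(values, pct):
    """Nearest-rank percentile breakpoint (a simple approximation)."""
    s = sorted(values)
    k = int(round(pct / 100 * (len(s) - 1)))
    return s[min(max(k, 0), len(s) - 1)]

def liq_factor(betas, effts, returns):
    """Monthly LIQ via a 2x3 sort, as for SMB: low/neutral/high beta
    groups at the 30th/70th percentiles, split at the median EFFT within
    each group; LIQ is the equal-weighted mean return of the three
    low-liquidity (high-EFFT) portfolios minus the three high-liquidity
    (low-EFFT) portfolios."""
    b30, b70 = percentile_value(betas, 30), percentile_value(betas, 70)
    groups = [[], [], []]
    for i, b in enumerate(betas):
        groups[0 if b <= b30 else (1 if b <= b70 else 2)].append(i)
    low_liq, high_liq = [], []
    for members in groups:
        med = median(effts[i] for i in members)
        illiquid = [returns[i] for i in members if effts[i] > med]
        liquid = [returns[i] for i in members if effts[i] <= med]
        low_liq.append(sum(illiquid) / len(illiquid))
        high_liq.append(sum(liquid) / len(liquid))
    return sum(low_liq) / 3 - sum(high_liq) / 3
```

Averaging across the three beta groups before differencing is what neutralises market beta: each leg of the long-short portfolio contains the same mix of low, neutral and high beta stocks.<br />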

4 Pricing results<br />

We conduct the cross-sectional analysis using equation 7 over four sample periods: (1) the entire sample, from January 1926 to December 2008; (2) pre-1963, from January 1926 to December 1962; (3) post-1963, from January 1963 to January 2001; and (4) from January 1963 to December 2008. The reason we analyse the post-1963 data in two samples is that the NYSE introduced the decimal regime in February 2001, which may affect the variation of liquidity and consequently our asset pricing results. Table 3 presents the regression results (employing equations 7 and 8) when we use the CAPM beta as the risk control for the four periods.<br />

Specification<br />

Variable 1 2 3 4 5 6<br />

Panel A: 01/1926-12/2008<br />

Intercept 0.002 (1.08) 0.0033 (1.91) 0.0074 (3.7) 0.0014 (0.74) 0.0057 (2.26) 0.005 (1.96)<br />

Beta 0.007 (2.27) 0.003 (1.03) 0.0021 (0.69) 0.0076 (2.29) 0.002 (0.67) 0.0025 (0.87)<br />

EFFT 0.1955 (1.86) 0.1151 (1.05) 0.1156 (1.21)<br />

Relative Size -0.0015 (-2.89) -0.0006 (-1.17) -0.0005 (-0.91)<br />

Momentum -0.0069 (-1.6) -0.0044 (-1.19)<br />

Panel B: 01/1926-12/1962<br />

Intercept 0.0005 (0.17) 0.0017 (0.59) 0.0067 (1.51) 0.001 (0.31) 0.0062 (1.33) 0.0073 (1.55)<br />

Beta 0.0107 (1.92) 0.0081 (1.51) 0.0064 (1.17) 0.0103 (1.8) 0.0061 (1.13) 0.0044 (0.87)<br />

EFFT 0.0883 (1.24) 0.0314 (0.44) 0.0417 (0.57)<br />

Relative Size -0.0014 (-1.61) -0.0011 (-1.19) -0.0014 (-1.47)<br />

Momentum -0.0053 (-0.83) -0.0014 (-0.22)<br />

Panel C: 01/1963-01/2001<br />

Intercept 0.0032 (1.45) 0.0041 (1.88) 0.0066 (2.67) 0.0014 (0.63) 0.0039 (1.61) 0.0024 (1.01)<br />

Beta 0.0038 (1.14) -0.0013 (-0.28) -0.0003 (-0.001) 0.0064 (1.57) -0.0009 (-0.25) 0.0019 (0.45)<br />

EFFT 0.2782 (2.66) 0.306 (3.11) 0.236 (2.48)<br />

Relative Size -0.0012 (-1.91) 0.0002 (0.38) 0.0006 (1.04)<br />

Momentum -0.0089 (-1.25) -0.0127 (-2.4)<br />

Panel D: 01/1963-12/2008<br />

Intercept 0.0029 (1.49) 0.0043 (2.21) 0.007 (3.09) 0.0018 (0.86) 0.0047 (2.08) 0.0034 (1.53)<br />

Beta 0.0034 (1.1) -0.0018 (-0.57) -0.0011 (-0.34) 0.005 (1.33) -0.0019 (-0.59) 0.0001 (0.002)<br />

EFFT 0.3081 (1.61) 0.2687 (1.66) 0.2132 (1.41)<br />

Relative Size -0.0014 (-2.45) -0.0001 (-0.15) 0.0003 (0.62)<br />

Momentum -0.0088 (-1.4) -0.0121 (-2.68)<br />

Table 3: Liquidity estimates based on CAPM. Numbers in parentheses are t-statistics, adjusted by Newey-West method<br />

Panel A reports the cross-sectional results for the entire sample. Specification 1 contains the CAPM beta and the intercept. The estimated market price of risk is 0.7 percent with a t-statistic of 2.27. Specification 2 includes EFFT as the only stock variable. The point estimate of EFFT is positive and marginally significant, but it becomes statistically insignificant after controlling for size and momentum (specification 6). Nonetheless, the coefficient associated with EFFT is economically interpretable. The point estimate is 0.12 (with a t-statistic of 1.21). This estimate suggests that if (the standard deviation of) the transaction costs is one percent, the associated monthly expected premium is about 0.12 percent, or 12 basis points. This corresponds to a premium of about 1.4 percent on an annual basis, which is not large. The average EFFT over the entire sample is about 1.2 percent (table 1), implying that the average EFFT, as the proxy for transaction costs, has commanded a premium of about 1.7 percent on an annual basis throughout our sample.<br />

The point estimate of 0.12 for EFFT, using more than 80 years of data, can be justified by considering its equivalent turnover rate for a round-trip trader. A representative trader who makes a round-trip trade over a specific period incurs transaction costs of 2 x EFFT (once when she buys and once when she sells). A point estimate of 0.12 for EFFT suggests that the expected gross returns over about 17 (2/0.12) months impound 2 x EFFT for this trader. A 17-month holding period is equivalent to about 70 percent turnover per year. This rate of turnover seems reasonable as an average for the NYSE from 1926 to 2008. As Hasbrouck (2009) mentions, the average annual turnover rate for the NYSE was around one in 2008 and was well under one for most of the 20th century. The historical data section of the NYSE website reports a 103 percent turnover rate for 2005 and a 12 percent rate for 1960. So our result for the coefficient of EFFT is consistent with these trading stories.<br />
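The holding-period arithmetic above can be checked directly. The coefficient is the paper's point estimate; the 1 percent spread is an assumed input.<br />

```python
efft = 0.01                    # assumed effective spread of 1 percent
coef = 0.12                    # Fama-MacBeth point estimate for EFFT
monthly_premium = coef * efft  # 12 basis points per month

# A round trip costs 2 x EFFT; the premium amortises it over ~17 months,
# which corresponds to roughly 70 percent annual turnover.
holding_months = 2 * efft / monthly_premium
annual_turnover = 12 / holding_months
```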

The premium commanded by EFFT is comparable with that of Hasbrouck's Gibbs estimate, as reported in Hasbrouck (2009). Hasbrouck's Gibbs estimate, a low-frequency measure of transaction costs, is a modification of Roll's (1984) estimator and measures deviations of transaction prices from efficient values, i.e., effective execution costs. Our results can be compared with those of Hasbrouck (2009) since both the Gibbs estimate and EFFT measure transaction costs and Hasbrouck reports his regression results for the period 1927-2006. Hasbrouck (2009) finds a coefficient of 0.93 for his transaction costs proxy, the Gibbs estimate. However, the point estimate is marginally significant at the 5 percent level (t-statistic of 2.07) only when he controls for the CAPM beta and does not take into account the characteristics controls. In the other specifications, there is no significant result for the regression coefficient of transaction costs. So our finding provides further support that transaction costs do not have significant explanatory power for the cross-sectional variation of returns. From an economic point of view, Hasbrouck's finding implies that one percent transaction costs lead to an 11.2 percent annual premium, which is very large. Moreover, as he explains, this large magnitude suggests a turnover rate of about six times per year for the NYSE, which is unrealistic. The premium commanded by EFFT, compared to the Gibbs estimate, accords better with straightforward trading stories.<br />

Specification 3 contains relative size, in lieu of EFFT, as the only stock characteristic. The coefficient of relative size is negative, as expected, and statistically significant at the 1 percent level (panel A, table 3). However, it becomes insignificant when we include EFFT in specification 5, while EFFT remains insignificant. This pattern can also be seen when we compare specifications 2, 3 and 5 in the pre-1963 and post-1963 subsamples; the t-statistic of the coefficient associated with relative size is attenuated in specification 5 while that of EFFT is barely altered. These findings imply that effects usually attributed to size are more precisely ascribed to transaction costs. In addition, our findings show that the inclusion of momentum barely modifies the economic and statistical significance of EFFT.<br />

Now we turn to the results from the subsamples. In all panels and specifications the proxy for liquidity, EFFT, has a positive<br />
sign, as expected. Interestingly, while EFFT is not significant using more than 80 years of data (Panel A), it is significant at<br />
the 1 percent level over the period 1963-2001 (Panel C). This significant positive result accords with previous research that<br />
analyses post-1963 data. However, when we extend the sample to 2008 (Panel D), EFFT becomes insignificant, at least at the<br />
conventional 1 and 5 percent significance levels. A likely reason is that decimalisation after 2001 reduced the cross-sectional<br />
variation in spreads, and hence in EFFT, which can attenuate the significance level. Nevertheless, the EFFT coefficient does not<br />
change considerably in any specification when we move from Panel C to Panel D. EFFT does not show significant explanatory power<br />
in the pre-1963 period (Panel B). The point estimates for EFFT (in all specifications) over the pre-1963 period are smaller than<br />
those reported for the post-1963 periods: while the EFFT coefficient in the post-1963 sample is at least 0.21 (specification 6 in<br />
Panel D), its maximum value is 0.09 for pre-1963 (specification 2 in Panel B). Furthermore, the p-values for rejecting the null<br />
hypothesis are higher for pre-1963 when we compare the corresponding specifications across the four subsamples. These low<br />
magnitudes and t-statistics suggest that the liquidity (level) effect was less prevalent, both statistically and economically,<br />
before 1963.<br />

Table 4 presents the regression results when we control for the three Fama-French risk factors. Specification 1 in all panels<br />
reports only the market prices of the three factors and the intercept. The coefficients on the risk factors and their<br />
t-statistics vary across panels, but generally the price of the size factor (SMB) is positive and significant, whereas those of<br />
the market and book-to-market risk factors are insignificant and close to zero.<br />

The findings are barely modified when we move from the CAPM-based results (Table 3) to the FF three-factor-based findings<br />
(Table 4). The sign of the coefficient on EFFT is again positive in all specifications and for all samples. Also, the EFFT point<br />
estimate is not statistically significant over the period 1926-2008 (Panel A) or pre-1963 (Panel B), whereas it is significant<br />
for post-1963 (Panels C and D). Furthermore, the magnitude of the coefficient on EFFT is virtually unaltered for the entire<br />
sample (Panel A), at about 0.12.<br />

Nevertheless, for the period 1963 to 2008 (Panel D) the regression coefficient on EFFT is significant at the 5 percent level,<br />
whereas it was insignificant when we used only CAPM beta as the risk control (Panel D, Table 3). Moreover, the point estimates<br />
for EFFT in the post-1963 analysis (Panels C and D, Table 4) are almost double their counterparts reported in Table 3, and their<br />
statistical significance also improves when we control for the three FF risk factors. On the other hand, the point estimate of<br />
EFFT is halved in the pre-1963 sample (Panel B, Table 4) compared with that reported in Table 3, and its t-statistics are<br />
slightly attenuated. These slight changes in the magnitude and significance levels of EFFT for the pre-1963 and post-1963<br />
subsamples confirm our previous key findings from Table 3: idiosyncratic liquidity has no explanatory power for the<br />
cross-sectional variation of stock returns before 1963, but it matters for expected returns after 1963.<br />

Specification<br />

Variable 1 2 3 4 5 6<br />

Panel A: 01/ 1926 -12/2008<br />

Intercept 0.008 (3.72) 0.0056 (2.36) 0.009 (3.5) 0.0074 (3.35) 0.0066 (2.35) 0.0056 (1.93)<br />

Mkt_beta -0.0018 (-0.77) 0.0001 (0.05) -0.0014 (-0.6) -0.0013 (-0.53) 0.0003 (0.12) 0.0005 (0.23)<br />

SMB_beta 0.006 (3.46) 0.0041 (1.69) 0.005 (2.56) 0.0058 (3.21) 0.003 (1.46) 0.0033 (1.51)<br />

HML_beta 0.0003 (0.15) 0.0011 (0.63) -0.0006 (-0.29) -0.0002 (-0.11) 0.0006 (0.33) 0.0008 (0.39)<br />

EFFT 0.0983 (0.62) 0.1263 (0.79) 0.1210 (1.13)<br />

Relative Size -0.0006 (-1.02) -0.0004 (-0.8) -0.0001 (-0.23)<br />

Momentum -0.0074 (-1.93) -0.0089 (-2.27)<br />

Panel B: 01/ 1926 -12/1962<br />

Intercept 0.0038 (1.05) 0.0015 (0.42) 0.0058 (1.19) 0.0034 (0.91) 0.0051 (1.01) 0.0057 (1.12)<br />

Mkt_beta 0.004 (0.95) 0.0058 (1.37) 0.0048 (1.13) 0.0047 (1.12) 0.0053 (1.2) 0.0045 (1.1)<br />

SMB_beta 0.0066 (2.2) 0.0051 (1.55) 0.0049 (1.68) 0.0067 (2.18) 0.0047 (1.56) 0.0046 (1.46)<br />

HML_beta 0.0014 (0.52) 0.0009 (0.3) -0.0012 (-0.46) 0.0004 (0.15) -0.0007 (-0.25) -0.001 (-0.46)<br />

EFFT 0.0746 (0.99) 0.0142 (0.16) 0.0101 (-0.12)<br />

Relative Size -0.0011 (-1.28) -0.001 (-1.14) -0.001 (-1.3)<br />

Momentum -0.0045 (-0.79) -0.0027 (-0.45)<br />

Panel C: 01/1963-01/ 2001<br />

Intercept 0.0073 (1.82) -0.006 (-1.16) 0.0083 (2.01) 0.004 (0.91) -0.0048 (-1.01) -0.0052 (-1.06)<br />

Mkt_beta -0.003 (-0.8) 0.0075 (1.7) -0.0058 (-1.39) 0.0015 (0.32) 0.0058 (1.34) 0.0074 (1.56)<br />

SMB_beta 0.0033 (1.72) -0.0061 (-2.00) 0.0068 (2.35) 0.0032 (1.52) -0.0043 (-1.17) -0.0029 (-0.85)<br />

HML_beta 0.0017 (0.65) 0.0038 (1.46) 0.0024 (0.91) 0.0004 (0.14) 0.0033 (1.21) 0.0017 (0.61)<br />

EFFT 0.5263 (3.01) 0.5358 (2.9) 0.4805 (2.78)<br />

Relative Size 0.001 (1.14) 0.0002 (0.26) 0.0004 (0.49)<br />

Momentum -0.0112 (-1.83) -0.0108 (-1.97)<br />

Panel D: 01/1963-12/ 2008<br />

Intercept 0.0057 (1.67) -0.0041 (-0.99) 0.0064 (1.82) 0.0044 (1.2) -0.0034 (-0.87) -0.0029 (-0.73)<br />

Mkt_beta -0.0028 (-0.86) 0.0054 (1.47) -0.0042 (-1.22) -0.0004 (-0.12) 0.0048 (1.34) 0.005 (1.29)<br />

SMB_beta 0.0028 (1.45) -0.0057 (-2.08) 0.005 (1.79) 0.0026 (1.29) -0.005 (-1.46) -0.0035 (-1.05)<br />

HML_beta 0.0036 (1.4) 0.0056 (2.21) 0.0032 (1.34) 0.0018 (0.76) 0.0047 (2.02) 0.0029 (1.28)<br />

EFFT 0.4639 (2.15) 0.446 (2.00) 0.4336 (2.17)<br />

Relative Size 0.0005 (0.71) 0.0001 (-0.06) 0.0003 (0.38)<br />

Momentum -0.0103 (-2.07) -0.0104 (-2.2)<br />

Table 4: Liquidity pricing estimates based on FF3 model. Numbers in parentheses are t-statistics, adjusted by Newey-West method<br />
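The Newey-West adjustment noted under the table applies to the time series of monthly cross-sectional slopes in a Fama-MacBeth (1973) procedure. The sketch below is our own illustration of that second stage with Bartlett-weighted autocovariances, not the authors’ code; the default lag count of four is an assumption.

```python
import numpy as np

def fama_macbeth_nw(gammas, lags=4):
    """Mean of the monthly cross-sectional slopes (Fama-MacBeth gammas) and
    its Newey-West (1987) t-statistic."""
    g = np.asarray(gammas, dtype=float)
    T = len(g)
    e = g - g.mean()
    lrv = e @ e / T  # lag-0 autocovariance of the slope series
    for lag in range(1, lags + 1):
        w = 1.0 - lag / (lags + 1)  # Bartlett kernel weight
        lrv += 2.0 * w * (e[lag:] @ e[:-lag]) / T
    se = np.sqrt(lrv / T)
    return g.mean(), g.mean() / se

# With serially uncorrelated slopes and lags=0 this reduces to the ordinary
# t-statistic; the numbers below are illustrative, not the paper's estimates.
rng = np.random.default_rng(1)
g = rng.normal(0.5, 1.0, size=1_000)
mean, t = fama_macbeth_nw(g, lags=0)
print(round(mean, 3), round(t, 1))
```

When the slope series is autocorrelated, positive autocovariances inflate the long-run variance and shrink the t-statistic, which is why the tables report the adjusted values.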

In addition, comparing specifications 2, 3 and 5 in all panels of Table 4 confirms our previous finding that the effect ascribed<br />
to size is more precisely explained by trading costs. When both EFFT and relative size are included in the model, the<br />
significance of relative size is attenuated while that of EFFT is only slightly modified.<br />

The coefficient on the momentum variable is significant over the entire sample (Panel A, Table 4), though its sign is negative,<br />
which suggests a negative feedback effect on the NYSE. The explanatory power of the momentum variable alongside the three<br />
Fama-French factors is consistent with Carhart’s (1997) finding that a factor mimicking momentum in past returns augments the<br />
Fama-French three factors. Nonetheless, the inclusion of momentum barely changes the economic and statistical significance of<br />
EFFT over the entire sample and all subsamples.<br />

In summary, the findings from Tables 3 and 4 suggest that while there is a positive and significant relation between<br />
idiosyncratic transaction costs and returns over the post-1963 sample, this relation is not significant over the entire sample<br />
or the pre-1963 subsample. Since the pre-1963 period is associated with a higher level of aggregate illiquidity than post-1963,<br />
and the systematic risk factors we have controlled for so far may not have impounded market-level liquidity risk, we control for<br />
market liquidity risk and re-run our asset pricing analysis. Table 5 reports the expected returns specifications when we control<br />
for CAPM beta and a liquidity beta calculated from the LIQ liquidity factor. The results show that the general findings are the<br />
same as those obtained in Table 3. The sign of the EFFT regression coefficient is positive in all samples, as expected. Also,<br />
the liquidity effect is statistically significant only for post-1963, and the point estimate of EFFT for the whole period<br />
(Panel A) is barely changed. However, the statistical significance of EFFT for the whole period (Panel A), and both its<br />
statistical and economic significance for pre-1963 (Panel B), are attenuated when we control for market liquidity risk.<br />



Interestingly, the reverse holds for post-1963 (Panels C and D): the regression coefficients on EFFT are doubled and their<br />
statistical significance generally increases.<br />

Specification<br />

Variable 1 2 3 4 5 6<br />

Panel A: 01/ 1926 -12/2008<br />

Intercept 0.0048 (2.64) 0.0037 (1.89) 0.0059 (2.24) 0.0045 (2.38) 0.0048 (1.83) 0.0039 (1.44)<br />

Market Beta 0.0045 (1.54) 0.0044 (1.38) 0.0039 (1.28) 0.0043 (1.42) 0.0034 (1.11) 0.0039 (1.26)<br />

Liquidity Beta -0.0046 (-3.7) -0.0029 (-1.35) -0.0041 (-2.81) -0.0042 (-3.37) -0.0018 (-0.93) -0.0021 (-1.1)<br />

EFFT 0.0985 (0.56) 0.1113 (0.66) 0.1754 (1.04)<br />

Relative Size -0.0003 (-0.53) -0.0004 (-0.68) -0.0001 (-0.27)<br />

Momentum -0.0058 (-1.49) -0.005 (-1.28)<br />

Panel B: 01/ 1926 -12/1962<br />

Intercept 0.0026 (0.87) 0.0016 (0.54) 0.0061 (1.26) 0.003 (0.99) 0.0055 (1.17) 0.0065 (1.37)<br />

Market Beta 0.009 (1.67) 0.0097 (1.72) 0.0067 (1.24) 0.0082 (1.54) 0.0071 (1.37) 0.0055 (1.07)<br />

Liquidity Beta -0.0049 (-2.23) -0.004 (-1.3) -0.0026 (-1.13) -0.0048 (-2.1) -0.0019 (-0.72) -0.0024 (-0.89)<br />

EFFT 0.0282 (0.26) 0.0059 (0.05) 0.034 (-0.3)<br />

Relative Size -0.0011 (-1.07) -0.001 (-1.06) -0.0012 (-1.2)<br />

Momentum -0.0034 (-0.58) -0.0021 (-0.34)<br />

Panel C: 01/1963-01/ 2001<br />

Intercept 0.0063 (2.7) 0.0017 (0.61) 0.0061 (2.49) 0.0042 (1.73) 0.0012 (0.43) -0.0005 (-0.17)<br />

Market Beta 0.0011 (0.32) -0.0011 (-0.29) 0.002 (0.59) 0.0038 (0.92) -0.0014 (-0.35) 0.0015 (0.34)<br />

Liquidity Beta -0.0037 (-2.57) 0.0028 (0.94) -0.0063 (-3.24) -0.0031 (-2.22) 0.0025 (0.76) 0.0021 (0.65)<br />

EFFT 0.5327 (2.8) 0.6078 (2.88) 0.5489 (2.77)<br />

Relative Size 0.0008 (1.09) 0.0002 (0.31) 0.0005 (0.75)<br />

Momentum -0.0096 (-1.52) -0.0103 (-1.78)<br />

Panel D: 01/1963-12/ 2008<br />

Intercept 0.0062 (2.98) 0.0041 (1.77) 0.0057 (2.51) 0.0048 (2.3) 0.0038 (1.58) 0.0021 (0.84)<br />

Market Beta 0.0006 (0.19) -0.0014 (-0.43) 0.0014 (0.44) 0.0019 (0.54) -0.002 (-0.52) -0.0001 (-0.02)<br />

Liquidity Beta -0.0042 (-2.94) 0.0002 (0.1) -0.0058 (-3.28) -0.0033 (-2.53) 0.0011 (0.37) 0.0011 (0.36)<br />

EFFT 0.3075 (1.12) 0.3632 (1.36) 0.4108 (1.61)<br />

Relative Size 0.0005 (0.8) -0.0001 (-0.11) 0.0003 (0.53)<br />

Momentum -0.0101 (-2.00) -0.0114 (-2.28)<br />

Table 5: Liquidity pricing estimates based on augmented-CAPM. Numbers in parentheses are t-statistics, adjusted by Newey-West method<br />
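A liquidity beta of the kind used in Table 5 is typically the slope on a market-wide liquidity factor in a first-stage time-series regression of stock returns on the market and liquidity factors. The sketch below is generic: the paper’s exact LIQ construction is not reproduced, and the simulated factors and coefficients are our own assumptions.

```python
import numpy as np

def liquidity_beta(returns, mkt, liq):
    """Slope on the liquidity factor in the time-series regression
    r_t = a + b_mkt * MKT_t + b_liq * LIQ_t + e_t."""
    X = np.column_stack([np.ones(len(mkt)), mkt, liq])
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return coef[2]

# Simulated monthly data: a stock with market beta 1.2 and liquidity beta -0.8.
rng = np.random.default_rng(42)
mkt = rng.normal(0.005, 0.05, size=2_000)
liq = rng.normal(0.0, 0.05, size=2_000)
r = 0.001 + 1.2 * mkt - 0.8 * liq + rng.normal(0.0, 0.02, size=2_000)
print(liquidity_beta(r, mkt, liq))  # close to -0.8
```

These first-stage betas are then carried into the cross-sectional regressions reported in the table, where the negative price on the liquidity beta mirrors the negative coefficients in specifications 1, 3 and 4.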

Therefore, the evidence from the entire sample and the pre-1963 subsample shows that part of the liquidity effect associated<br />
with the liquidity level is attributable to market liquidity risk, though it is economically small. Nonetheless, liquidity risk<br />
is less important for the post-1963 period.<br />
Specifications 1, 3 and 4 in all panels of Table 5 show that market liquidity risk, as the only variable representing the<br />
liquidity effect, is statistically significant, though economically insignificant. However, when the liquidity level is included<br />
in the model (specifications 2, 5 and 6), the liquidity beta becomes insignificant. The liquidity level is likewise<br />
insignificant for the entire sample and pre-1963 (Panels A and B) but significant for post-1963. This again suggests that both<br />
the liquidity level and liquidity risk are responsible for the liquidity effect, although market liquidity risk is less<br />
important post-1963.<br />

Moreover, comparing specifications 2, 3 and 5 in Table 5 confirms our previous finding that the effect associated with size is<br />
better attributed to the liquidity level. Furthermore, Table 5 shows that momentum is generally significant, but it affects<br />
neither the point estimates nor the significance levels of EFFT and the liquidity beta.<br />
We further control for the three Fama-French factor loadings in addition to the loading on the liquidity state factor, but the<br />
results barely modify our previous findings (the corresponding table is not tabulated).<br />

5 Concluding remarks<br />

Studying the pricing implications of liquidity poses a fundamental trade-off. On the one hand, we require a long time series of<br />
data to learn about expected asset returns; on the other hand, the high-frequency data usually used to estimate liquidity are<br />
not available for long periods (in the US market they are unavailable prior to 1983). There are two ways around this. The first<br />
is to limit the sample to the post-1983 period and use a high-frequency measure to estimate liquidity, in the hope that the<br />
resulting asset pricing tests reveal the pricing implications of liquidity. The second is to use low-frequency data, which are<br />
available for much longer periods, employ low-frequency measures to estimate liquidity, and use those estimates in asset pricing<br />
tests. This paper adopts the second approach and uses EFFT as a low-frequency proxy: EFFT is computed from daily data and is<br />
intended to proxy for intra-day effective spreads. This paper examines the cross-sectional association of stock<br />



expected returns and trading costs over 1926-2008 on the NYSE, using EFFT as a proxy for liquidity. We conduct the analysis over<br />
the entire sample, the pre-1963 period (for which liquidity asset pricing research is scarce) and the post-1963 period (which<br />
almost all research in this area covers). Our findings from the entire sample show that expected returns increase with stock<br />
illiquidity. This is consistent with the usual view that stocks are priced so that their gross returns include compensation for<br />
trading costs. However, this positive relation is not statistically significant at the conventional levels, although it is<br />
economically significant and interpretable. We find that a one percent (one standard deviation) increase in the liquidity level<br />
translates into a monthly expected premium of about 0.12 percent, or 12 basis points. This result is robust to various<br />
characteristics, such as size, and to risk controls, including a systematic liquidity factor. Our results from the subsamples<br />
show that the liquidity level has explanatory power for the cross-sectional variation of stock expected returns only over the<br />
post-1963 period; the effect is both economically and statistically insignificant for the pre-1963 period. Although controlling<br />
for systematic liquidity risk barely changes these findings, a new and interesting finding emerges: evidence from the entire<br />
sample and pre-1963 suggests that market liquidity risk is marginally significant in explaining cross-sectional stock expected<br />
returns, though the relation is economically insignificant.<br />

These findings suggest that liquidity statistically affects the cross-sectional variation of expected returns over the entire<br />
sample as well as the subsamples, though the channel for this effect appears to differ across periods. Over the post-1963 period<br />
the premium associated with the liquidity level is more prevalent than that associated with systematic liquidity risk; the<br />
opposite holds over the pre-1963 period and the entire sample.<br />

6 References<br />

Amihud, Y., Mendelson, H., 1986. Asset pricing and the bid-ask spread. Journal of Financial Economics 17: 223-249.<br />
Amihud, Y., Mendelson, H., Pedersen, L.H., 2005. Liquidity and asset prices. Foundations and Trends in Finance 1(4): 269-364.<br />
Carhart, M., 1997. On persistence in mutual fund performance. Journal of Finance 52: 57-82.<br />
Fama, E.F., French, K.R., 1992. The cross-section of expected stock returns. Journal of Finance 47: 427-466.<br />
Fama, E.F., French, K.R., 1993. Common risk factors in the returns on stocks and bonds. Journal of Financial Economics 33: 3-56.<br />
Fama, E.F., MacBeth, J., 1973. Risk, return, and equilibrium: empirical tests. Journal of Political Economy 81(3): 607-636.<br />
French, K.R., 2010. Tuck School of Business at Dartmouth, accessed 9 March 2010.<br />
http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/index.html.<br />
Gottesman, A., Jacoby, G., 2005. Payout policy, taxes, and the relation between returns and the bid-ask spread. Journal of<br />
Banking and Finance 30: 37-58.<br />
Goyenko, R.Y., Holden, C.W., Trzcinka, C.A., 2009. Do liquidity measures measure liquidity? Journal of Financial Economics 92:<br />
153-181.<br />
Hasbrouck, J., 2009. Trading costs and returns for U.S. equities: estimating effective costs from daily data. Journal of Finance<br />
64(3): 1445-1477.<br />
Holden, C.W., 2009. New low-frequency spread measures. Journal of Financial Markets 12: 778-813.<br />
Korajczyk, R.A., Sadka, R., 2008. Pricing the commonality across alternative measures of liquidity. Journal of Financial<br />
Economics 87(1): 45-72.<br />
Newey, W., West, K., 1987. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance<br />
matrix. Econometrica 55: 703-708.<br />
Novy-Marx, R., 2008. Momentum is not momentum. Working paper, University of Chicago.<br />
Roll, R., 1984. A simple implicit measure of the effective bid-ask spread in an efficient market. Journal of Finance 39:<br />
1127-1139.<br />



BANKING<br />





ELECTRONIC BANKING IN JORDAN: A FRAMEWORK OF ADOPTION<br />

Muneer Abbad, Prince Mohammad Bin Fahd University, Saudi Arabia<br />

Juma’h Abbad, Al Al Bayte University, Jordan<br />

Faten Jaber, Oxford Brookes University, UK<br />

Abstract: Why some customers use e-banking systems whereas others do not is the problem that motivated this study. The study<br />
examines the factors of customers’ technology adoption based on the TAM. E-banking adoption is studied from an information<br />
systems acceptance point of view, referring to the idea that customers must use an information system to make bank transactions;<br />
hence, more knowledge of the factors that affect IT adoption is needed in order to better understand and facilitate its<br />
acceptance. Perceived ease of use, perceived usefulness, subjective norm, security and trust, Internet experience, and enjoyment<br />
are the important factors that affect customers’ adoption of e-banking in Jordan.<br />

Keywords: Electronic banking, Structural Equation Modeling, Technology Acceptance Model<br />

1- Introduction<br />

The world is changing at a staggering rate, and information technology (IT) is considered the key driver of these changes. Based<br />
on figures published by Internet World Stats (2008), there are approximately 1.5 billion Internet users around the world and<br />
approximately 42 million in the Middle East. The Internet and advances in information and communication technologies (ICT) have<br />
had a profound effect on the banking industry. Porter and Millar (1985) found that banking is one of the most<br />
information-intensive sectors. Many of the existing studies of the banking industry have been conducted in developed countries.<br />
Hence, the current study attempts to contribute by focusing on the banking industry of a developing economy, Jordan.<br />

2- E-banking Definitions and Benefits<br />

E-banking is “an umbrella term for the process by which a customer may perform banking transactions electronically without<br />
visiting a brick-and-mortar institution. The following terms all refer to one form or another of e-banking: personal computer<br />
(PC) banking, Internet banking, virtual banking, online banking, home banking, remote electronic banking, and phone banking. PC<br />
banking and Internet or online banking is the most frequently used designation” (www.bankersonline.com). E-banking technology<br />
can represent a variety of services, ranging from common automatic teller machine (ATM) services and direct deposit to automatic<br />
bill payment (ABP), electronic transfer of funds (EFT), and computer (PC) banking (Kolodinsky et al., 2001). Gopalakrishnan et<br />
al. (2003) defined e-banking as “the use of the Internet as a remote delivery channel for banking services”. In this research,<br />
e-banking is defined as using the Internet and websites to perform transactions without the customer’s physical presence at a<br />
bank.<br />
The main benefits of e-banking for customers are speed, convenience, availability, accessibility, time savings, and cost<br />
savings. The main benefits for banks are greater efficiency, cost reduction, the elimination of location constraints, expanded<br />
reach, and fuller use of information technology. However, these benefits of e-banking cannot be realized unless customers<br />
actually use the e-banking services.<br />

3- E-banking in Jordan<br />

The banking sector in Jordan is very dynamic and liberal. It comprises 23 banks, eight of which are subsidiaries of foreign<br />
banks and two of which are Islamic banks (CBJ, 2008). A new banking law aimed at improving the industry’s efficiency came into<br />
force in 2000; it protects depositors’ interests, diminishes money market risk, guards against the concentration of lending, and<br />
includes articles on new banking practices (e-commerce, e-banking) and money laundering. Some banks have started adopting modern<br />
banking practices such as automated check clearing and the use of magnetic check processors, unified reporting forms and<br />
electronic data-transmission networks. Most banks in Jordan have launched e-banking services. Table 1 summarizes the main<br />
e-banking services provided by Jordanian banks (Source: banks’ websites, 2008).<br />



4- Theoretical Background<br />

Four main research models have been developed in the area of information technology acceptance. The first was the theory of<br />
reasoned action (TRA) (Fishbein and Ajzen, 1975) and its extension, the theory of planned behavior (TPB) (Ajzen, 1985), which<br />
view a person’s intention to perform or not perform a behavior as the immediate determinant of the action. The second model, the<br />
technology acceptance model (TAM) (Davis, 1986), was adapted theoretically from TRA; it developed and validated scales for two<br />
main variables, perceived usefulness and perceived ease of use, which have been hypothesized to be fundamental determinants of<br />
user acceptance of information technology (see Figure 1). The third model, known as TAM2, is a theoretical extension of TAM<br />
(Venkatesh and Davis, 2000) that explains perceived usefulness and usage intentions in terms of social influence and cognitive<br />
instrumental processes. The fourth model is the innovation diffusion theory (IDT) perspective (Rogers, 1995), a decision process<br />
theory proposing five distinct stages of diffusion: knowledge, persuasion, decision, implementation and confirmation. These<br />
models address the direct and indirect influences on actual usage behavior.<br />

In this study, an extended version of the technology acceptance model (TAM) will be developed to investigate the<br />

factors that affect customers’ adoption of e-banking in Jordan.<br />

5- Methodology<br />

This study uses a theoretical model to examine and predict Jordanian customers’ acceptance of e-banking. The Technology<br />
Acceptance Model (TAM) was used to construct the proposed model of customers’ acceptance of e-banking. The first step is to<br />
construct the proposed model; in the second step, the data will be collected and analyzed. In this research the first step is<br />
completed, and the second is left for future work. A majority of studies using TAM have relied on survey methodology. The survey<br />
method will be similar to that used in previous TAM studies, to maintain continuity in the research regarding perceived<br />
usefulness and perceived ease of use. The addition of new constructs will require supplementary questions that follow previous<br />
work in the field. In the future, data will be gathered through a questionnaire-style survey administered to bank customers in<br />
Jordan.<br />
The survey will be used to test the hypotheses regarding the structure of the proposed model. To analyze the data, structural<br />
equation modeling (SEM) techniques will be applied: t-tests of significance will be conducted on the hypothesized relationships<br />
among the factors in the proposed model, and a path model will then be used to analyze the relationships between the factors in<br />
the research model in order to explain customers’ adoption of e-banking.<br />
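The structural part of such a path analysis can be sketched with ordinary regressions. This is a simplified stand-in for full SEM (no latent measurement model, no fit indices), run on simulated rather than survey data; the variable names and path coefficients are illustrative assumptions, not results of this study.

```python
import numpy as np

def path_coefs(y, X):
    """OLS estimates [intercept, slopes...] for one structural equation;
    a simplified stand-in for the structural part of an SEM."""
    Z = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef

# Simulated respondents consistent with H1 (SN -> intention) and
# H2 (SN -> perceived usefulness); all numbers are illustrative.
rng = np.random.default_rng(7)
n = 500
sn = rng.normal(size=n)                          # subjective norm
pu = 0.6 * sn + rng.normal(size=n)               # H2 path, true slope 0.6
bi = 0.5 * pu + 0.3 * sn + rng.normal(size=n)    # H1 path plus PU -> BI
print(path_coefs(pu, sn)[1])                          # near 0.6
print(path_coefs(bi, np.column_stack([pu, sn]))[1:])  # near [0.5, 0.3]
```

In the planned analysis, each hypothesized arrow of the research model corresponds to one such coefficient, whose t-test decides whether the hypothesis is supported.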

6- Literature Review: Factors Affecting Customers’ Adoption of E-Banking<br />

6.1 Subjective Norm<br />

Subjective norm is defined as “the person’s perception that most people who are important to him think he should or should not<br />
perform the behavior in question” (Fishbein and Ajzen, 1975). Davis (1986) omitted subjective norm from the original TAM.<br />
Several studies found no significant effect of subjective norm on behavioral intention; for example, Mathieson (1991) found no<br />
significant effect of subjective norm on intention to use. On the contrary, some studies did find a significant effect. For<br />
example, Venkatesh and Davis (2000) found that subjective norm had a direct effect on intention to use, but that this effect was<br />
no longer significant three months after implementation of the information technology in a voluntary setting. Taylor and Todd<br />
(1995) found a significant direct effect of subjective norm on intention to use among potential information technology users.<br />
Thus, subjective norm has contributed to explaining the intention to use information technology. Accordingly, SN was<br />
hypothesized to affect customers’ intention to use e-banking:<br />

H1: Subjective norm has a significant effect on intention to use e-banking.<br />

The relationship between SN and PU was supported as significant by TAM2 (Venkatesh and Davis, 2000). Research on SN has<br />
concluded that this construct is the weakest of the TPB set of constructs (Hu and Chau, 1999). Even though Davis (1989)<br />
concluded that SN is not a determinant of behavioral intention (BI), other studies established a significant relationship<br />
between SN and both PU (Venkatesh and Davis, 2000) and BI (Ajzen, 1991; Riemenschneider et al., 2003; Taylor and Todd, 1995).<br />
This relationship has thus been proposed, supported and tested in more than one model (TAM, TRA, TPB, and DTPB). Accordingly, SN<br />
was hypothesized to affect PU:<br />

H2: Subjective norm has a significant effect on perceived usefulness.<br />

6.2 Security and Trust<br />

Security has been widely recognized as one of the main obstacles to the adoption of e-banking and is an important aspect of the<br />
debate over the challenges facing e-banking. Studies conducted in the Omani banking industry (e.g. Khalfan et al., 2006;<br />
Al-Sabbagh and Molla, 2004) report that security concerns have been one of the major issues in e-banking adoption. Mattila and<br />
Mattila (2005) also claim that security has been widely recognized as one of the main barriers to the adoption of Internet<br />
innovation.<br />
Trust is a closely related construct in the literature. The work of Pavlou (2003) supported a direct effect of trust on<br />
intention, and other work has emphasized the role of trust in predicting usefulness, intention and usage (Suh and Han, 2003;<br />
Gefen et al., 2000). Warkentin et al. (2002) likewise concluded that trust can influence intention. Trust is “the willingness of<br />
a party to be vulnerable to the action of another party based on the expectation that the other will perform a particular action<br />
important to the trustor irrespective of the ability to monitor or control that other party” (Mayer et al., 1995). In other<br />
words, it implies that something of importance could potentially be lost as a result of engaging in the trusting relationship.<br />

In the e-banking context, the customer is willing to depend on the Internet in the expectation that it will perform what the<br />
customer expects it to do. Because e-banking is highly uncertain (Clarke, 1997), trust is expected to be one of the main factors<br />
influencing the acceptance of e-banking. Gefen (2003), Pavlou (2003) and others extended the TAM model to include trust in<br />
online business settings, hypothesizing trust to be an antecedent of PU and PEOU. Researchers (e.g. Gefen, 2003; Newell et al.,<br />
1998) have also empirically verified that customer trust has an impact on store loyalty. Based on the literature and the TAM<br />
model, security and trust are expected to have a significant effect on behavioral intention through PU and PEOU:<br />

H3: Security and trust have a significant effect on perceived usefulness.<br />

H4: Security and trust have a significant effect on perceived ease of use.<br />

6.3 Internet Experience<br />

Experience of users was a significant factor in many studies in the technology acceptance research. Additionally, it<br />

has been utilized in several studies as a predictor or moderator in the technology acceptance research (e.g. Speier<br />

and Venkatesh, 2002; Venkatesh and Davis, 2000; Igbaria et al., 1995; Harrison and Rainer, 1992).<br />

Igbaria et al. (1995) and Zmud (1979) stated that prior experience has long been regarded as a factor identifying<br />

individual differences in technology acceptance research. Prior experience, such as computer or Internet use<br />

experience, was supported in several studies as strongly influencing the intention to use a specific system through<br />

perceived usefulness and perceived ease of use (Chau, 1996; Agarwal and Prasad, 1999). This is because as<br />

people gain more experience with a system and learn the necessary skills, they are likely to develop a more favorable<br />

perception of its ease of use (Hackbarth et al., 2003). People tend to adopt information systems that are compatible<br />

with those previously adopted and used (Dearing and Meyer, 1994). This is consistent with the study of Eastin (2002),<br />

where experience in telephone usage positively influenced the adoption of online shopping, banking and<br />

investing. Similarly, in the study of Liao and Cheung (2001) on Internet-based e-shopping behavior in Singapore,<br />

IT education and Internet experience were significant antecedents in the development of people’s intention toward e-shopping.<br />

Anandarajan et al. (2000) found that perceived usefulness was related to time spent on the Internet, and ease<br />

of use correlated positively with use of the Internet for business activity.<br />

Based on evidence in prior research, customers’ experience with the Internet was conceptualized as an external<br />

variable. In addition, to explain user beliefs concerning the usefulness and ease of use of e-banking, prior<br />

experience with the Internet has to be considered.<br />

H5: Internet experience has a significant effect on perceived usefulness.<br />

H6: Internet experience has a significant effect on perceived ease of use.<br />



6.4 Enjoyment<br />

Enjoyment refers to the extent to which the activity of using a computer is perceived to be enjoyable in its own right<br />

(Davis et al., 1992). This contrasts with PU: PU can be seen as an extrinsic motivation, whereas perceived<br />

enjoyment (PE) is an intrinsic motivation to use information systems.<br />

Davis et al.’s (1992) findings suggest that increasing the enjoyability of a system would enhance the acceptability<br />

of IT. Later studies confirmed this relationship in different contexts, such as Internet usage (Teo et al.,<br />

1999; Agarwal et al., 2000; Moon and Kim, 2001), personal computer use (Igbaria et al., 1997),<br />

enterprise applications (Venkatesh and Davis, 1994) and a high school e-learning system (Lee, 2006). In the e-banking<br />

context, Pikkarainen et al. (2004) added perceived enjoyment, information on online banking, security and privacy,<br />

and quality of Internet connection to the consumers’ e-banking acceptance model. Yi and Hwang (2003)<br />

studied self-efficacy, enjoyment, and learning goal orientation in the context of TAM with university students. Self-efficacy<br />

appeared to influence use directly, whereas enjoyment and learning goal orientation were mediated through<br />

self-efficacy, usefulness and ease of use. Usefulness and ease of use in turn influenced the decision to use through<br />

behavioral intention. Baierova et al. (2003) stated that people using the web for entertainment valued enjoyability.<br />

Given the enjoyability of the Internet as a channel for online transactions, and based upon the preceding research and<br />

the TAM, the following hypotheses are proposed:<br />

H7: Enjoyment has a significant effect on perceived usefulness.<br />

H8: Enjoyment has a significant effect on perceived ease of use.<br />

6.5 Behavioral Intention, Perceived Ease of Use, and Perceived Usefulness<br />

Davis et al. (1989) compared the TAM and the TRA, added behavioral intention to their<br />

model, and found a strong relationship between perceived usefulness (PU) and intention (perceived usefulness<br />

accounted for 57% of the variance in behavioral intention). According to the technology acceptance model, the theory of<br />

reasoned action, and the theory of planned behavior, behavioral intention was the most appropriate predictor of<br />

actual use (Ajzen, 1985; Ajzen and Fishbein, 1980; Davis et al., 1989; Fishbein and Ajzen, 1975). Perceived ease of<br />

use and perceived usefulness were fundamental determinants of user acceptance in the technology acceptance model<br />

(Davis, 1989). Together with perceived ease of use, perceived usefulness has been a strong determinant of usage<br />

intentions, so understanding the determinants of perceived usefulness and perceived ease of use is<br />

important. Venkatesh and Davis (2000) extended the technology acceptance model, including key determinants of<br />

perceived usefulness, to understand how the effects of these determinants change as users gain experience with an<br />

information technology. They found that subjective norm, voluntariness, image, job relevance, output<br />

quality, and result demonstrability significantly influenced a user’s technology adoption.<br />

Based on the studies reviewed above, the proposed study model tests the hypotheses of a modified technology<br />

acceptance model in the e-banking domain:<br />

H9: Perceived usefulness has a significant effect on intention to use e-banking.<br />

H10: Perceived ease of use has a significant effect on intention to use e-banking.<br />

H11: Perceived ease of use has a significant effect on perceived usefulness.<br />

The proposed model is shown in Figure 2 with arrows representing direction of relationships. The structural model<br />

is shown in Figure 3.<br />
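As an illustrative sketch only (the study itself estimates a full structural equation model, not separate regressions), the directional claims of H9–H11 can be checked on synthetic data with simple least-squares fits; all variable names and data below are hypothetical, not the paper’s dataset:<br />

```python
import numpy as np

# Synthetic survey-style data standing in for the study's measures.
# PEOU -> PU (H11); PU and PEOU -> BI (H9, H10). All effects are invented.
rng = np.random.default_rng(0)
n = 500
PEOU = rng.normal(0, 1, n)                        # perceived ease of use
PU = 0.6 * PEOU + rng.normal(0, 1, n)             # H11 path
BI = 0.5 * PU + 0.3 * PEOU + rng.normal(0, 1, n)  # H9 and H10 paths

def ols(y, X):
    """OLS coefficients of y on the columns of X, with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_h11 = ols(PU, PEOU)                             # slope tests H11
b_h9_h10 = ols(BI, np.column_stack([PU, PEOU]))   # slopes test H9 and H10

print(f"H11 (PEOU->PU): {b_h11[1]:.2f}")
print(f"H9 (PU->BI): {b_h9_h10[1]:.2f}, H10 (PEOU->BI): {b_h9_h10[2]:.2f}")
```

With data generated this way, all three slopes come out positive, mirroring the hypothesized directions; on real survey data the same signs would support H9–H11.<br />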

7- Conclusions and Discussions<br />

The primary objective of the study was to examine customers’ acceptance of e-banking in the context of the technology<br />

acceptance model (TAM), extended with new variables derived from the e-banking acceptance literature on the one hand and<br />

from interviews with e-banking specialists on the other. In doing so, this research has extended the TAM in a<br />

previously unexplored direction. The proposed model showed that, as hypothesized, customers’ intention to use e-banking<br />

is affected by perceived ease of use, perceived usefulness, subjective norm, Internet experience, security<br />

and trust, and enjoyment.<br />



The empirical examination of the adoption of e-banking, using a structural model based on the extension of<br />

TAM, will be tested and validated in the second step. This research will provide further evidence of the<br />

appropriateness of applying TAM to measure the acceptance of e-banking. The proposed model identified the major<br />

factors affecting customers’ adoption of e-banking: perceived ease of use, perceived usefulness,<br />

subjective norm, Internet experience, security and trust, and enjoyment.<br />

8- Implications and Contributions<br />

The study makes significant contributions across the areas of IT adoption and usage research and practice. A summary<br />

of the main contributions is:<br />

1. The development of a conceptual model that explains and predicts the factors that influence the adoption of<br />

information technology in general and e-banking in particular.<br />

2. The empirical support for proposed hypotheses based on the integrative research framework and literature.<br />

3. The proposed model can be used to predict successive usage behavior of customers of e-banking systems.<br />

4. The proposed model extends the domain of the TAM to customers’ adoption of e-banking.<br />

5. The proposed model expanded the technology acceptance model by including subjective norm, Internet<br />

experience, security and trust, and enjoyment as external factors.<br />

6. The results of this research provide managers with information for planning e-banking websites<br />

and selecting services in the future.<br />

9- Limitations and Future Research<br />

The findings reported in this research are subject to some limitations. For example, e-banking (using the Internet) is only<br />

one of the various types or forms of online banking products available. Thus, the relationships found in this research<br />

may or may not differ according to whether consumers are using telephone banking, electronic money transfer,<br />

direct bill payment or other forms, and the results may generalize only to Internet-based e-banking as a service. To<br />

prove the proposed model’s explanatory ability, the present model has yet to be tested on different technologies in the<br />

banking industry (for instance, mobile banking) or technological innovations in general (for instance, video on<br />

mobile phones).<br />

Behavioral intention is the closest construct that can be used as a surrogate for e-banking usage. Behavioral<br />

intention is informative, but it does not replace exploring the actual usage of a system. In this study, a self-reported<br />

measure provided by respondents was used to measure their intention to use e-banking, which might not be the best<br />

measure; a more accurate approach would be to explore users’ usage in a longitudinal study and control for actual<br />

usage of the system. It is suggested that future studies should focus on refining the proposed variables to include<br />

other factors (e.g. cost and quality of Internet connection, information on banks’ websites) with large samples and<br />

investigate different types of technology innovations. This research provides a foundation for additional research<br />

in developing countries in the technology acceptance domain, such as the adoption of e-government, e-commerce,<br />

electronic customer relationship management (e-CRM), and e-learning.<br />

This research used the TAM as the baseline model to find the main factors that affect customers’ adoption of<br />

e-banking; other models could be used as baseline models, such as TRA, IDT, TPB, and TAM2. Finally, bank<br />

employees’ adoption of e-banking needs to be investigated, since employees are the other users of the system.<br />

10- References<br />

Agarwal, R. and Prasad, J. (1999) ‘Are individual differences germane to the acceptance of new information<br />

technologies?’ Decision Sciences 30, (2) 361-91.<br />

Agarwal, R., Sambamurthy, V. and Stair, R. (2000) ‘Research report: the evolving relationship between general and<br />

specific computer self-efficacy: an empirical assessment.’ Information Systems Research 11, (4) 418-430.<br />

Ajzen, I. (1985) From intentions to actions: A theory of planned behavior. New York: Springer-Verlag.<br />



Ajzen, I. (1991) ‘The theory of planned behavior.’ Organizational Behavior and Human Decision Processes 50, (2)<br />

179-211.<br />

Ajzen, I. and Fishbein, M. (1980) Understanding attitudes and predicting social behavior. Prentice-Hall, Englewood<br />

Cliffs, NJ.<br />

Al-Sabbagh, I. and Molla, A. (2004) ‘Adoption and Use of Internet Banking in the Sultanate of Oman: An<br />

Exploratory Study.’ Journal of Internet Banking and Commerce 9, (2) 1-7.<br />

Alwin, D. and Hauser, R. (1975) ‘Decomposition of effects in path analysis.’ American Sociological Review 40, (1)<br />

37-47.<br />

Anandarajan, M., Simmers, C. and Igbaria, M. (2000) ‘An exploratory investigation of the antecedents and impact<br />

of Internet usage: An individual perspective.’ Behavior & Information Technology 19, (1) 69-85.<br />

Baierova, P., Tate, M. and Hope, B. (2003) ‘The impact of purpose for Web use on user preferences for Web design<br />

features.’ 7th Pacific Asia Conference on Information Systems, 10-13.<br />

Bollen, K. (1989) Structural equations with latent variables. New York: Wiley.<br />

Byrne, B. (1998) Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications,<br />

and programming. Mahwah, NJ: Lawrence Erlbaum Associates.<br />

CBJ (2008) Central Bank of Jordan. [online] available from [20<br />

December 2008].<br />

Chau, P. (1996) ‘An empirical assessment of a modified technology acceptance model.’ Journal of Management<br />

Information Systems 13, (2) 185–204.<br />

Clarke, A. (1997) ‘The Structural and Spatial Impacts of Bank Mergers.’ Abstracts of the Association of American<br />

Geographers. Ft. Worth, TX, 44-45.<br />

Cohen, J. (1988) Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.<br />

Davis, F. (1986) A technology acceptance model for empirically testing new end-user information systems: Theory<br />

and results. (Doctoral dissertation, Sloan School of Management, Massachusetts Institute of Technology).<br />

Davis, F., Bagozzi, R. and Warshaw, P. (1992) ‘Extrinsic and intrinsic motivation to use computers in the<br />

workplace.’ Journal of Applied social Psychology 22, (14) 1111-1122.<br />

Davis, F. (1989) ‘Perceived usefulness, perceived ease of use, and user acceptance of information technology.’ MIS<br />

Quarterly 13, (3) 319-340.<br />

Davis, F., Bagozzi, R. and Warshaw, P. (1989) ‘User acceptance of computer technology: a comparison of two<br />

theoretical models.’ Management Science 35, (8) 982-1003.<br />

Dearing, J. and Meyer, G. (1994) ‘An Exploratory Tool for Predicting Adoption Decisions.’ Science<br />

Communication 16, (1) 43-57.<br />

Eastin, M. (2002) ‘Diffusion of e-commerce: An analysis of the adoption of four e-commerce activities.’ Telematics<br />

and Informatics 19, 251-267.<br />

Fishbein, M. and Ajzen, I. (1975) Belief, attitude, intention and behavior: An introduction to theory and research.<br />

Reading, MA: Addison-Wesley.<br />

Fornell, C. and Larcker, D. (1981) ‘Evaluating structural equation models with unobservable variables and<br />

measurement error.’ Journal of Marketing Research 18, (1) 39-50.<br />

Gefen D., Straub D. and Boudreau, M. (2000) ‘Structural equation modeling and regression: Guidelines for research<br />

practice.’ Communications of the Association for Information Systems 4, (7).<br />

Gefen, D. (2003) ‘TAM or just plain habit: A look at experienced online shoppers.’ Journal of End User<br />

Computing15, (3) 1-13.<br />

Hackbarth, G., Grover, V. and Yi, M. (2003) ‘Computer playfulness and anxiety: Positive and negative mediators of<br />

the system experience effect on perceived ease of use.’ Information and Management 40, (3) 221-232.<br />

Hair, J., Black, B., Babin, B., Anderson, R. and Tatham, R. (2006) Multivariate data analysis. 6th ed, Englewood<br />

Cliffs: Prentice Hall.<br />

Harrison, A. and Rainer, R. (1992) ‘The influence of individual differences on skill in end-user computing.’ Journal<br />

of Management Information Systems 9, (1) 93-111.<br />



Hu, P. and Chau, P. (1999) ‘Physician acceptance of telemedicine technology: an empirical investigation.’ Topics in<br />

Health Information Management 19, (4) 20-35.<br />

Hu, P., Chau, P., Sheng, O. and Tam, K. (1999) ‘Examining the technology acceptance model using physician<br />

acceptance of telemedicine technology.’ Journal of Management Information Systems 16, (2) 91-112.<br />

Igbaria, M., Guimaraes, T. and Davis, G. (1995) ‘Testing the determinants of micro-computer usage via a structural<br />

equation model.’ Journal of Management Information Systems 11, (4) 87-114.<br />

Igbaria, M., Zinatelli, N., Cragg, P. and Cavaye, A. (1997) ‘Personal computing acceptance factors in small firms: a<br />

structural equation model.’ MIS Quarterly 21, (2) 279–305.<br />

Internet World Stats (2008) Usage and population statistics. [online] available from<br />

[10 August 2008].<br />

Khalfan, A., AlRefaei, Y. and Al-Hajry, M. (2006) ‘Factors influencing the adoption of Internet banking in Oman: a<br />

descriptive case study analysis.’ Int. J. Financial Services Management 1, (2) 155-172<br />

Kline, R. (1998) Principles and practice of structural equation modeling. New York, NY: The Guilford Press.<br />

Kolodinsky, J. and Hogarth, J. (2001) ‘The adoption of electronic banking technologies by American consumers.’<br />

Consumer Interests Annual 47, (1) 1-9.<br />

Lee, Y. (2006) ‘An empirical investigation into factors influencing the adoption of an e-learning system.’ Online<br />

Information Review 30, (5) 517-541.<br />

Liao, Z. and Cheung, Z. (2001) ‘Internet-based e-shopping and consumer attitudes: an empirical study.’ Information<br />

and Management 38, (5) 299–306.<br />

Mathieson, K. (1991) ‘Predicting user intentions: comparing the Technology Acceptance Model with the theory of<br />

planned behavior.’ Information Systems Research 2, (3) 173-191.<br />

Mattila, A. and Mattila, M. (2005) ‘How perceived security appears in the commercialization of internet banking.’<br />

International Journal of Financial Services Management 8, (3) 206-217.<br />

Mayer, R., Davis, J. and Schoorman, F. (1995) ‘An integrative model of organizational trust.’ Academy of<br />

Management Review 20, (3) 709-734.<br />

Moon, J. and Kim, Y. (2001) ‘Extending the TAM for a World Wide Web Context.’ Information and Management<br />

38, (4) 217-231.<br />

Newell, Stephen J., Goldsmith, Ronald E. and Banzhaf, J. (1998) ‘The Effect of Misleading Environmental Claims<br />

on Consumer Perceptions of Advertisements.’ Journal of Marketing Theory and Practice 6, (2) 48-60.<br />

Pavlou, P. (2003) ‘Consumer acceptance of electronic commerce: integrating trust and risk in the technology<br />

acceptance model.’ International Journal of Electronic Commerce 7, (3) 101-134.<br />

Pikkarainen, T., Pikkarainen, K., Karjaluoto, H. and Pahnila, S. (2004) ‘Consumer acceptance of online-banking: an<br />

extension of the technology acceptance model.’ Internet Research 14, (3) 224–235.<br />

Porter, M. and Millar, V. (1985) ‘How information gives you competitive advantage.’ Harvard Business Review 63,<br />

(4) 149-160.<br />

Riemenschneider, C., Harrison, D. and Mykytyn, P. (2003) ‘Understanding IT adoption in small<br />

business: Integrating current theories.’ Information and Management 40, (1) 269-285.<br />

Rogers, E. (1995) Diffusion of innovations. 4th ed, New York: Free Press.<br />

Ross, D. (1975) ‘Direct, indirect and spurious effects: comment on causal analysis of interorganizational relations.’<br />

Administrative Science Quarterly 20, (1) 295-307.<br />

Segars, A. (1997) ‘Assessing the unidimensionality of measurement: A paradigm and illustration within the context<br />

of information systems research.’ Omega 25, (1) 107-121.<br />

Sekaran, U. (2003) Research Methods for Business: A Skill Building Approach. USA: John Wiley and Sons.<br />

Speier, C. and Venkatesh V. (2002) ‘The hidden minefields in the adoption of sales force automation technologies.’<br />

Journal of Marketing 65, (1) 98-111.<br />

Suh, B. and Han, I. (2003) ‘The impact of customer trust and perception of security control on the acceptance of<br />

electronic commerce.’ International Journal of Electronic Commerce 7, (3) 135-161.<br />



Tabachnick , B. and Fidell, L. (2001) Using multivariate statistics. 4th ed, London: Allyn and Bacon.<br />

Taylor, S. and Todd, P. (1995) ‘Understanding information technology usage: a test of competing models.’<br />

Information Systems Research 6, (2) 144-176.<br />

Teo, T., Lim, V. and Lai, R. (1999) ‘Intrinsic and extrinsic motivation in Internet usage.’ Omega International<br />

Journal of Management 27, (1) 25-37.<br />

Venkatesh, V. and Davis, F. (2000) ‘A theoretical extension of the technology acceptance model: four longitudinal<br />

field studies.’ Management Science 46, (2) 186-204.<br />

Venkatesh, V. and Davis, F. (1994) ‘Modeling the Determinants of Perceived Ease of Use.’ Proceedings of the<br />

International Conference on Information Systems, Vancouver, BC, 213-228.<br />

Warkentin, M., Gefen, D., Pavlou, P. and Rose, G. (2002) ‘Encouraging citizen adoption of e-government by<br />

building trust.’ Electronic Markets 12, (3) 157-162.<br />

Yi, M. and Hwang, Y. (2003) ‘Predicting the use of web-based information systems: self-efficacy, enjoyment,<br />

learning goal orientation, and the technology acceptance model.’ Int. J. Human-Computer Studies 59, 431-449.<br />

Zmud, R. (1979) ‘Individual Differences and MIS Success: A Review of the Empirical Literature.’ Management<br />

Science 25, (10) 966-979.<br />

Figure 1: Technology Acceptance Model (TAM)<br />

[Diagram not reproduced: Subjective Norms, Security and Trust, Internet Experience, and Enjoyment relate to Perceived Usefulness and Perceived Ease of Use, which lead to Behavioral Intention.]<br />


Figure 2: The Proposed Model<br />

[Diagram not reproduced: latent constructs SN (indicators SN1-SN3), IE (IE2, IE3), ST (ST1-ST3) and ENJ (ENJ2, ENJ3, ENJ6), with error terms e12-e22.]<br />

Figure 3: Structural Proposed Model<br />

[Diagram not reproduced: structural paths among PU (indicators PU1-PU3), PEU (PEU1, PEU4-PEU6) and IU (IU1-IU4), with error terms e1-e11.]<br />

Table 1: E-banking Services Provided by Jordanian Banks (December 2008)<br />

Arab Bank: Arabi Online, Hala Arabi (phone banking services), and SMS banking (notification services)<br />

The Housing Bank for Trade and Finance: SMS, e-banking, and shopping via the Internet (UBU)<br />

Arab Jordan Investment Bank: Online banking / Jordan (for customers and employees)<br />

HSBC: Internet banking (personal and business)<br />

Bank of Jordan: Phone bank, Internet banking, mobile bank, and SMS<br />

Ahli Bank: Online banking<br />

Islamic International Arab Bank: Phone banking services, Internet shopping and services, SMS, and VISA electronic card services<br />

Jordan Kuwait Bank: SMS, Internet banking, phone banking, cyber branch, “Western Union” money transfer, and Zain pre-paid mobile<br />

Union Bank: Union Online, SMS services, and money gram transfers<br />

JIF Bank: Internet banking, phone banking, and mobile banking<br />

Jordan Commercial Bank: SMS, phone banking, and e-statement<br />

Egyptian Arab Land Bank: Not available<br />

Cairo Amman Bank: E-channels (phone banking, Internet banking, and SMS banking)<br />

National Bank of Kuwait (NBK): Online banking service (Watani Online)<br />

Standard Chartered Bank: E-banking<br />

ABC Bank: Phone banking, mobile banking, ABC Online, and ABC SMS banking<br />

Capital Bank of Jordan: Bank E, e-banking, and SMS banking<br />

Societe Generale De Banque-Jordanian: Internet banking<br />

Rafidain Bank: Not available<br />

Audi Bank: Dial Audi, Audi Online, and mobile phone<br />

BLOM Bank: eBlom (Internet banking) and AlloBLOM phone<br />



PROCYCLICALITY OF BANKS’ CAPITAL BUFFER IN ASEAN COUNTRIES<br />

Elis Deriantino, Bank Indonesia<br />

Email: elis_deriantino@bi.go.id<br />

Abstract. Developing two models to estimate the effect of the business cycle on banks’ capital buffers and the effect of the capital buffer<br />

on banks’ loan supply, using annual panel data (1997-2009) on 63 commercial banks in ASEAN countries, we find strong evidence of<br />

a procyclical pattern in the capital buffers of ASEAN banks. Nevertheless, this procyclicality effect is somewhat small:<br />

a decrease of 1 percentage point in GDP growth reduces loan growth by around 0.4 percentage points through the rise in the<br />

capital buffer. As the Basel Committee on Banking Supervision (2010) proposes a new capital requirement regime to address the<br />

procyclicality of capital requirements, this empirical finding may serve as input for national bank regulators in determining the optimal<br />

capital buffer level, so that it is effective in preventing credit volumes from becoming excessive during the upturn side of the<br />

business cycle while providing banks with greater resilience that enables them to continue reasonable lending activity during the downturn<br />

side of the business cycle.<br />

Keywords: Capital buffer, Procyclicality, Business cycle<br />

JEL Classification: E32, G21<br />

1 Introduction<br />

The risk sensitivity of capital requirements proposed by the Basel framework (the 1988 Basel Standard and Basel II) is<br />

considered to lead to a certain degree of cyclicality in capital requirements that can potentially amplify business<br />

cycle fluctuations by decreasing banks’ lending activity during the downturn side of the business cycle, and hence poses<br />

a threat to the stability of the macroeconomic and financial system; this is the so-called procyclicality of capital requirements.<br />

Many previous studies, among them the works by Bikker and Metzemakers (2004), Ayuso et al. (2002), Chiuri et al.<br />

(2001) and Drumond (2009), confirm the procyclicality of capital requirements by pointing to negative co-movements<br />

between the business cycle and banks’ capital. However, in practice, most banks hold more capital (a capital<br />

buffer) than the regulatory minimum. Stronger supervision and market discipline, lessons learnt from past crises<br />

and the need to adopt sound risk management to anticipate the increasing probability of default during economic<br />

downturns are some factors that motivate banks to hold more capital, despite the fact that it may be more costly for banks to<br />

hold more capital (Borio et al. (2001) and Ayuso et al. (2002)). This additional buffer should provide banks with<br />

greater resilience that also enables them to maintain a reasonable volume of lending during economic<br />

downturns. A study by Jokipii and Milne (2006) finds that the capital buffers of RAM banks (the 10 countries that joined the<br />

European Union (EU) in May 2004) correlate positively with the business cycle and that banks in these countries<br />

tend to hold larger capital buffers than banks in other EU regions. This confirms that well-capitalized banks may<br />

exhibit countercyclical, prudent behavior. Nevertheless, the above studies focus on identifying the existence of<br />

procyclicality in capital requirements but lack a detailed assessment of the effect of this pattern on banks’ lending activity.<br />

Thus, even though capital requirements are generally considered procyclical, it remains to be assessed<br />

whether this pattern has a substantial impact on the volume of credit to the economy. This implies two policy<br />

questions, i.e. (i) do banks’ capital buffers actually exhibit a significant procyclical pattern, and (ii) does this pattern of the<br />

capital buffer substantially constrain banks’ loan supply?<br />

The aim of this paper is to answer these two policy questions. We employ annual bank-level panel data on 63<br />

listed banks covering the period 1997-2009 in five Association of South East Asian Nations (ASEAN) countries:<br />

Indonesia, Singapore, Malaysia, Thailand and the Philippines. To the best of our knowledge, none of the existing studies has<br />

explored evidence from ASEAN countries using a data period that covers the two major crises that hit the region: the<br />

1997/98 Asian financial crisis and the 2008/09 global financial crisis. By examining ASEAN data, the present<br />

study contributes to the literature in this area.<br />

By developing two models to estimate the effect of the business cycle on banks’ capital buffers and the effect of the<br />

capital buffer on banks’ loan supply, we find strong evidence of a procyclical pattern in the capital buffers of banks<br />

in ASEAN countries. Banks are found to reduce their loan growth during economic downturns due to a rise in the<br />

capital buffer resulting from impaired loan quality (rising NPLs). Nevertheless, this procyclicality effect is somewhat<br />

small: a decrease of 1 percentage point in GDP growth reduces loan growth by around 0.4 percentage<br />

points through the rise in the capital buffer, while over the observed period average loan growth of banks in these ASEAN<br />

countries is around 11%. Moreover, the results also indicate that the risk proxy NPL has a significant and positive<br />

relationship with the capital buffer, meaning that banks with relatively risky credit portfolios tend to hold larger capital<br />

buffers. This evidence, together with the tendency of ASEAN banks to hold a sizeable buffer above the<br />

minimum requirement (the average capital buffer is around 13.5 percentage points above the country’s minimum<br />

regulatory capital during the observed period), shows that these ASEAN banks are adopting relatively sound risk<br />

management, which has contributed to moderating the effect of the procyclicality of capital requirements. These prudently<br />

capitalized banks emerged after years of strengthened prudential regulation and supervisory frameworks resulting from<br />

lessons learnt from the region’s own financial crisis in 1997/98.<br />

2 Data, Methodology & Empirical Models<br />

2.1 Data<br />

We employ annual unbalanced bank-level panel data on 63 listed banks covering the period 1997-2009 in five<br />

ASEAN countries: Indonesia, Singapore, Malaysia, Thailand and the Philippines. Due to data availability, we select<br />

banks whose data cover at least the period 2004-2009. The data period covers a full business cycle in the<br />

respective countries and the two major crises that hit the region: the 1997/98 Asian financial crisis and the 2008/09<br />

global financial crisis.<br />

Bank indicator data are obtained from Bankscope, while macroeconomic data for each country are from CEIC.<br />

2.2 Methodology & Empirical Models<br />

To address the two policy questions, we adopt the strategy of Wong et al. (2010) and develop two models:<br />

1. Estimating the effect of business cycle on banks’ capital buffer<br />

In this model, the capital buffer (Buffer) is modeled as a function of the business cycle (proxied by real GDP growth) with<br />

control variables as prescribed in previous empirical studies, including Return on Equity (ROE) as a proxy for the cost of<br />

holding capital and the Non-Performing Loan ratio (NPL) as a proxy for banks’ risk profile:<br />

Buffer_{i,t} = α_0 + α_1 Buffer_{i,t-1} + α_2 GDP_{j,t} + α_3 NPL_{i,t} + α_4 ROE_{i,t} + μ_i + ε_{i,t}    (1)<br />

where i = individual bank index, 1,2,…,N; j = country index, 1,2,…,M; t = year index, 1,2,…,T; μ_i captures<br />

time-invariant individual bank effects and ε_{i,t} is an error term.<br />

The dependent variable Buffer is defined as the additional capital held by an individual bank above its<br />

country’s minimum regulatory capital 1 , i.e. the bank’s Capital Adequacy Ratio minus the country’s minimum regulatory<br />

requirement.<br />
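As a minimal illustration of this definition (the regulatory minimums follow footnote 1; the CAR figures below are invented examples, not sample data):<br />

```python
# Buffer = bank's Capital Adequacy Ratio (CAR) minus the country's minimum
# regulatory capital ratio, in percentage points. Minimums per footnote 1
# (Singapore shown at its post-May-2004 level); CAR inputs are made up.
MIN_CAPITAL = {
    "Indonesia": 8.0,
    "Malaysia": 8.0,
    "Thailand": 8.5,
    "Philippines": 10.0,
    "Singapore": 10.0,
}

def capital_buffer(car: float, country: str) -> float:
    """Additional capital above the country's regulatory minimum."""
    return car - MIN_CAPITAL[country]

print(capital_buffer(21.5, "Indonesia"))   # a bank at 21.5% CAR holds a 13.5 pp buffer
print(capital_buffer(18.5, "Singapore"))
```
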

The inclusion of the first lag of Buffer as an independent variable is intended to capture banks’ adjustment<br />

costs (Ayuso et al., 2002).<br />

Real GDP growth (GDP) is the proxy for the business cycle. A negative co-movement between GDP and<br />

Buffer indicates a procyclical pattern in the capital buffer. In contrast, a positive relationship between the capital<br />

buffer and the business cycle indicates that banks adopt prudent capital behavior, increasing their capital during the upturn<br />

side of the business cycle in order to have an adequate buffer to cover losses, which are likely to increase as the economy enters a<br />

downturn phase (Borio et al., 2001).<br />

The cost of holding capital is proxied by Return on Equity (ROE), and its effect on Buffer is expected to be<br />

negative.<br />

Banks’ risk profile is proxied by the Non-Performing Loan ratio (NPL), and its impact on the capital buffer is also<br />

expected to be negative.<br />

1 Minimum regulatory capital ratio for Indonesia and Malaysia: 8%; Thailand: 8.5%; the Philippines: 10%; and Singapore: 10% since May 2004 (12%<br />

before May 2004).<br />



Given the dynamic nature of this model due to the inclusion of lagged Buffer as an independent variable, we estimate the model using the two-step system Generalized Method of Moments (GMM Sys) estimator. We choose GMM Sys over differenced GMM because, when T is small and the series is highly persistent (α1 close to 1), the differenced GMM estimator suffers from substantial finite-sample bias and low precision, since lagged levels of the series provide weak instruments for subsequent first differences; GMM Sys reduces this finite-sample bias and increases the precision of the estimator by exploiting additional moment conditions (Blundell et al., 2000; Bond et al., 2001). Using Monte Carlo simulation, Soto (2009) provides evidence that GMM Sys generates lower bias and higher efficiency than other estimators for panel data with small N and T and highly persistent series. We also estimate the model using the Least Squares Dummy Variable (LSDV) and Ordinary Least Squares (OLS) methods to check whether the GMM Sys coefficient on the lagged capital buffer is biased. An unbiased coefficient on the lagged capital buffer should lie between the LSDV and OLS estimates, given that LSDV estimators are downward biased due to the negative correlation between the transformed lagged dependent variable and the transformed error term, whilst OLS estimators are upward biased due to the positive correlation between the lagged dependent variable and the individual effects.<br />
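The bracketing logic of this check can be illustrated with a small simulation on entirely hypothetical data (not the paper’s sample): pooled OLS overstates the persistence parameter, the within (LSDV) estimator understates it, and a consistent estimate should fall between them.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, alpha = 500, 13, 0.65            # cross-sections, periods, true persistence
mu = rng.normal(0.0, 1.0, N)           # individual fixed effects
y = np.zeros((N, T))
y[:, 0] = mu / (1 - alpha) + rng.normal(0.0, 1.0, N)   # start near steady state
for t in range(1, T):
    y[:, t] = alpha * y[:, t - 1] + mu + rng.normal(0.0, 1.0, N)

# Pooled OLS: the lagged level is positively correlated with the omitted
# fixed effect, so the persistence estimate is biased upward.
x, z = y[:, :-1].ravel(), y[:, 1:].ravel()
ols = np.polyfit(x, z, 1)[0]

# Within (LSDV): demeaning induces negative correlation between the
# transformed lag and the transformed error (Nickell bias), biasing downward.
xc = y[:, :-1] - y[:, :-1].mean(axis=1, keepdims=True)
zc = y[:, 1:] - y[:, 1:].mean(axis=1, keepdims=True)
lsdv = (xc.ravel() @ zc.ravel()) / (xc.ravel() @ xc.ravel())

print(f"true alpha = {alpha:.2f}, OLS = {ols:.3f}, LSDV = {lsdv:.3f}")
```

With N = 500 the two biased estimates bracket the true value clearly, which is exactly the diagnostic applied to the GMM Sys coefficient in the text.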

2. Estimating the effect of the capital buffer on banks’ loan supply.<br />

In this stage, we model loan growth (Loan) as a function of the bank’s capital buffer (Buffer), the business cycle (GDP) and the interbank market interest rate (IR):<br />

Loani,t = β0 + β1GDPj,t + β2IRj,t + β3Bufferi,t + νi + ζi,t    (2)<br />

where i is the individual bank index (1, 2, …, N), j is the country index (1, 2, …, M), and t is the year index (1, 2, …, T); νi captures individual bank time-invariant idiosyncratic effects and ζi,t is an error term.<br />

A procyclical impact of the capital buffer on bank loan growth would be indicated by a negative co-movement between Buffer and Loan. Both the business cycle and the interest rate are proxies for demand-side factors of loan growth: the business cycle is expected to correlate positively with loan growth, while the interest rate is expected to have a negative impact on it.<br />

We estimate model (2) using the LSDV method. Finally, the procyclicality effect of the capital buffer on lending activity is calculated as the product of the sensitivity of banks’ capital buffer to GDP growth in model (1) and the sensitivity of loan growth to the capital buffer in model (2):<br />

(α2*(mean GDP/mean Buffer) / (1-α1)) * (β3*(mean Buffer/mean Loan))<br />

3 Empirical finding<br />

3.1 Descriptive statistics<br />

Table 1 below shows that, in general, banks in ASEAN hold a sizeable capital buffer. The resilience and risk profile of the banking system during the 2008/09 global financial crisis were much improved compared with a decade earlier, when the Asian financial crisis hit the region in 1997/98, as indicated by a Buffer in 2008/09 almost double that of 1997/98 as well as a lower NPL. The ASEAN region has also been considered to have high potential to emerge as another economic force in the world. Recent economic developments, as indicated by GDP growth, provide evidence of the greater resilience of ASEAN countries in weathering the downturn caused by the 2008/09 global financial crisis; in particular, Indonesia and the Philippines, despite lower economic growth than in previous years, still experienced positive growth during the 2008/09 global crisis.<br />



Mean Min Max St.Dev<br />

Period: 1997-2009<br />

Buffer 13.46 -8.5 133.3 18.10<br />

Loan 10.53 -237.53 194.40 35.74<br />

NPL 9.52 0.13 89.98 11.30<br />

ROE 13.69 0.00 59.55 8.72<br />

GDP 3.82 -13.13 9.24 4.24<br />

IR 7.55 0.44 51.06 5.77<br />

Period: 1997-1998 Asian Financial Crisis<br />

Buffer 6.58 -8.00 37.20 9.95<br />

Loan -21.18 -135.05 68.81 46.49<br />

NPL 12.74 0.14 57.07 14.08<br />

ROE 12.51 0.00 53.76 11.37<br />

GDP -0.93 -13.13 8.55 7.53<br />

IR 17.89 1.50 51.06 15.21<br />

Period: 2008-2009 Global Financial Crisis<br />

Buffer 11.46 1.88 121.00 15.54<br />

Loan 11.82 -83.50 193.42 23.03<br />

NPL 4.01 0.17 15.43 2.99<br />

ROE 11.59 0.23 37.39 6.41<br />

GDP 1.93 -2.30 6.01 3.95<br />

IR 2.87 0.44 11.24 3.35<br />

Table 1. Descriptive Statistics<br />

3.2 Regression Result<br />

The results in the table below provide strong evidence of a procyclical capital buffer pattern among banks in ASEAN countries. Banks are found to reduce their loan growth during economic downturns due to a rise in the capital buffer resulting from impaired loan quality (rising NPL). Nevertheless, this procyclicality effect is fairly small: a 1 percentage point decrease in GDP growth reduces loan growth by only around 0.4 percentage points through the rise in the capital buffer, whereas average loan growth for banks in these ASEAN countries over the observed period is around 11%. The estimation results for model (2) also indicate that credit rationing during downturns in ASEAN banks is driven more by demand-side factors than by the supply-driven capital buffer.<br />

Dependent Variable: Buffer (OLS, LSDV and GMM Sys columns) and Loan (final LSDV column)<br />

Independent variable OLS LSDV GMM Sys LSDV<br />

c -0.38 2.57 0.51*** 21.31***<br />

[-0.26] [1.04] [2.96] [4.17]<br />

Buffer(i,t-1) 0.88*** 0.65*** 0.80***<br />

[26.08] [9.57] [293.56]<br />

GDP(j,t) -0.27** -0.18** -0.45*** 1.88**<br />

[-2.26] [-2.03] [-20.40] [2.30]<br />

NPL(i,t) 0.28** 0.33* 0.49***<br />

[2.38] [1.65] [41.87]<br />

ROE(i,t) 0.08 0.04 0.02<br />

[1.61] [0.78] [1.61]<br />

IR(j,t) -1.30***<br />

[-4.30]<br />

Buffer(i,t) -0.51***<br />

[-4.75]<br />

Adj-R-sqr 0.79 0.81 0.19<br />

DW 1.92 1.93 1.70<br />

AR (1) (p-val) -1.92 (0.06)*<br />

AR (2) (p-val) 0.48 (0.63)<br />

Sargan test (p-val) 55.71 (0.37)<br />

Table 2. Regression Result<br />

Note: *,**,*** indicate a level of confidence of 90%, 95% and 99%, respectively.<br />
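As a rough consistency check, the overall procyclicality effect discussed in the text can be recomputed from the Table 1 full-period means and the Table 2 point estimates; this sketch assumes α1 and α2 come from the GMM Sys column and β3 from the Loan (LSDV) column.

```python
# Table 1 means (1997-2009) and Table 2 point estimates
mean_gdp, mean_buffer, mean_loan = 3.82, 13.46, 10.53
alpha1, alpha2 = 0.80, -0.45     # Buffer(i,t-1) and GDP(j,t), GMM Sys column
beta3 = -0.51                    # Buffer(i,t) in the Loan equation (LSDV)

# long-run sensitivity of Buffer to GDP, times sensitivity of Loan to Buffer
effect = (alpha2 * (mean_gdp / mean_buffer) / (1 - alpha1)) \
         * (beta3 * (mean_buffer / mean_loan))
print(f"loan growth falls by ~{effect:.2f} pp per 1 pp drop in GDP growth")
```

The product is about 0.42 percentage points, matching the “around 0.4” figure quoted in the text.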

Moreover, the results of model (1) also indicate that the risk proxy NPL has a significant positive relationship with the capital buffer, meaning that banks with relatively risky credit portfolios tend to hold larger capital buffers. This evidence suggests that banks in the ASEAN region have adopted relatively sound risk management, which has helped to moderate the procyclical effect of capital requirements. This relatively sound risk management is also<br />



supported by the evidence that ASEAN banks tend to hold sizeable buffers above the minimum requirement (banks’ capital buffers average around 13.5 percentage points above the country’s minimum regulatory requirement over the observed period). These prudently capitalized banks have emerged after years of stronger supervision, reflecting lessons learnt from the region’s own financial crisis in 1997/98.<br />

4 Conclusion and Policy Implication<br />

By developing two models to estimate the effect of the business cycle on banks’ capital buffers and the effect of the capital buffer on banks’ loan supply, using annual panel data (1997-2009) on 63 commercial banks in ASEAN countries, we find strong evidence of a procyclical capital buffer pattern among ASEAN banks. Banks are found to reduce their loan growth during economic downturns due to a rise in the capital buffer resulting from impaired loan quality (rising NPL). Nevertheless, this procyclicality effect is fairly small, given that a 1 percentage point decrease in GDP growth reduces loan growth by only around 0.4 percentage points through the rise in the capital buffer.<br />

As the Basel Committee on Banking Supervision (2010) proposes a new capital requirement regime to address the procyclicality of capital requirements, under which banks are required to hold a capital conservation buffer and a countercyclical capital buffer, these empirical findings may serve as input for national bank regulators considering the implementation of the new regime. Taking into account the nature of their banks’ procyclicality effect, regulators may determine an optimal capital buffer level that effectively prevents the volume of credit from becoming excessive during the upturn side of the business cycle, while providing banks with the greater resilience needed to continue reasonable lending during the downturn side.<br />

5 References<br />

Ayuso, J., A. Gonzales, and J. Saurina. 2004. Are capital buffers pro-cyclical? Evidence from Spanish panel data. Journal of Financial Intermediation 13, 249-264.<br />

Basel Committee on Banking Supervision. 2010. Countercyclical capital buffer proposal. Consultative document. Bank for International Settlements.<br />

Bikker, J., and P. Metzemakers. 2004. Is bank capital procyclical? A cross-country analysis. DNB Working Paper No. 009/2004, De Nederlandsche Bank NV.<br />

Blundell, R., Bond, S., and Windmeijer, F. 2000. Estimation in dynamic panel data models: improving on the performance of the standard GMM estimators. The Institute for Fiscal Studies Working Paper No. 00/12.<br />

Bond, S., Leblebiciouglu, A., and Schiantarelli, F. 2001. GMM estimation of empirical growth models. Mimeo, September 2001.<br />

Borio, C., C. Furfine, and P. Lowe. 2001. Procyclicality of the Financial System and Financial Stability: Issues and Policy Options. BIS Papers No. 1.<br />

Chiuri, M.C., Ferri, G., and Majnoni, G. 2001. The macroeconomic impact of bank capital requirements in emerging economies: past evidence to assess the future. Mimeo, World Bank.<br />

Financial Stability Forum. 2009. Addressing Procyclicality in the Financial System.<br />

Soto, M. 2009. System GMM estimation with a small sample. Institut d’Analisi Economica, Barcelona.<br />

Wong, E., Fong, T., and Choi, H. 2010. An empirical assessment of the procyclicality of loan-loss provisions of banks in EMEAP economies. Presentation delivered at the 11th Annual Bank of Finland/CEPR conference, Helsinki, 7-8 October 2010.<br />

Drumond, I. 2009. Bank Capital Requirements, Business Cycle Fluctuations and the Basel Accords: A Synthesis. Journal of Economic Surveys 23(5), 798-830.<br />

Jokipii, T., and Milne, A. 2006. The cyclical behaviour of European bank capital buffers. Research Report, Swedish Institute for Financial Research.<br />



DETERMINING EFFECTIVE INDICATORS FOR CUSTOMER PERFORMANCE IN REPAYING BANK<br />

LOAN (CASE STUDY: IRANIAN BANKS)<br />

Fariba SeyedJafar Rangraz, Amirkabir University of Technology<br />

Dr. Naser Shams, Amirkabir University of Technology<br />

EMAIL: RANGRAZFARIBa6384@gmail.com<br />

Abstract. Nowadays, decision making in the fields of investment and banking has become an important issue, due to unstable economic conditions and the unknown factors behind people’s success. Developing countries face limited capital, little attention is paid to bank customers, and no framework has been developed to scientifically evaluate the project process. As a result, there are many unfinished or failed projects in those countries, including Iran. This paper aims to evaluate the money-lending process for Iranian banks’ customers. The 5C’s model is used to measure the relationship between each element of the model, taking into consideration the factors leading to customer satisfaction. A quantitative method is used to measure the findings. From the Iranian bank experts’ point of view, character is the most important index among the 5C’s for lending and has the highest value.<br />

Key words: Banking, 5C’s, lending, repayment, scoring<br />

1. Introduction<br />

Banks use computational techniques to decrease risk and reduce the probability of non-repayment. However, the unstable economy and external environmental factors force banks to review their methods of decision making.<br />

K. Bryant presented a model for evaluating agricultural loans (Bryant, K., 2001), drawing on Duchessi’s research to design it. According to Bryant, the lender needs to know five factors when consumer credit is being evaluated (Duchessi, P., 1995): credit, capital, capacity, collateral and character. Bryant introduced an expert system for decision making (ALEES), in which qualitative factors such as the skill, experience and intelligence of the loan expert are key, while quantitative factors are mixed in.<br />

Rosman and Bedard studied the different strategies that lenders use to approve borrowers. They suggest that the lending process is structured in two parts, financial and nonfinancial, each with its own features (Rosman, Andrea, and Bedard, Jean, 1999).<br />

Tansel and Yardakhl presented a model for evaluating credit decisions, based on determining the creditworthiness of manufacturing firms seeking to borrow from Turkish banks (Yardakhl, M., and Tansel, Y., 2004). In this model, all the qualitative and quantitative indexes related to a company’s profitability are recognized, and the results assign the company a score using the AHP technique.<br />

Chan and Mak presented a method showing the benefits of selecting an improved manufacturing technique, consisting of the following three approaches (Chan, F.T.S., and Mak, K.L., 2000):<br />

- Strategic approach: achieving common goals<br />

- Economic approach: economic benefits<br />

- Analytical approach: economic and non-economic benefits and risks<br />

Reitan identified the factors that matter when evaluating loans. He estimated experts’ ability to determine the value of a project and to measure its factors, and compared the experts based on the importance of each factor. He also examined why certain evaluation processes are incorrect, categorizing 42 indexes into 6 groups: the manager’s character and experience, the services offered, market conditions, different pertinent aspects of the organization, and financial matters (Reitan, Bjornar, 1998).<br />

In this paper, the factors affecting the lending decision-making process are studied, and the different results arising from the various situations, features and abilities of borrowers are obtained.<br />



This information is categorized into 5 groups. The effective factors in bank decision making are called the 5C’s (Tim Hill of COCC):<br />

- Character<br />

- Capacity<br />

- Condition<br />

- Capital<br />

- Collateral<br />

1.1. Character<br />

Character is the impression the customer makes on the lender, who decides whether the customer is reliable enough to repay. The customer’s education and experience in the project are therefore surveyed, and the quality of employees’ performance and their experience should also be considered (Tim Hill of COCC).<br />

1.2. Capacity<br />

The capacity of a loan applicant refers to their repayment ability and is evaluated using expected cash flow. For customers requesting short-term loans, bank experts pay attention to the method of payment, such as cash, to the customer’s assets that can be converted to cash immediately, and to future cash flow and the company’s ability to honor short-term commitments. In contrast, for clients taking long-term loans, the focus is usually on long-term assets, including the difference between assets and debts, ongoing activities, profitability and the company’s potential (Accounting organization of Iran, 2007).<br />

1.3. Capital<br />

Capital is equal to net worth, or net assets, and reflects the company’s success and value. This criterion can be misleading, however, given the variability of asset worth: official accounts may not show the market worth of assets, typically because inflation and rising price levels leave book values below market worth. Still, valuation based on official asset worth is not critical for the lender. Capital is the money a client personally invests in a project, and it represents the client’s share of the loss if the firm goes bankrupt (SBA, North Carolina).<br />

1.4. Collateral<br />

Collateral is studied from the lending aspect and is one of the basic factors in the lending decision-making process. Collateral also plays a notable role as one of the effective elements in evaluating the borrower and determining his credit condition. Regardless of how careful and cautious a bank’s employees are in lending, there will sometimes be problems with repayment; in that situation, the fallback is collateral. The bank must decide what type of collateral is needed to recoup the loan: property, a house or land, equipment or a personal guarantee may be required (Edward Poll, J.D., M.B.A., CMC, 2003).<br />

2. Research method<br />

The research method for this paper is based on hypothesis testing, but the research goals are not limited to it; other testing methods have also been used. The paper is heuristic in kind and, from the environmental-studies aspect, a survey; conceptually it is descriptive, because it studies what is. The technique used for gathering information was a questionnaire. Several Iranian bank experts were interviewed, and a questionnaire was ultimately selected as the best means of gathering information for this research. For evaluating the indexes, a questionnaire consisting of 26 questions was designed, categorized into 5 groups corresponding to the 5C’s indexes.<br />



It should be mentioned that the Likert scale used for scoring employed the values 1, 3, 5, 7 and 9 rather than 1, 2, 3, 4 and 5, to show more variance among groups and thus raise accuracy; 1 was defined as the least important item and 9 as the most important. The description of the scoring can be seen in the appendix.<br />

The statistical population for this research consists of Iranian bank experts. The selected banks included both government-owned and informal banks, in order to obtain a comprehensive picture of Iranian banks rather than of one specific kind.<br />

The sampling method in this study was random; 70 questionnaires were returned complete. The questionnaire used in this research can be seen in the appendix.<br />

2.1. Research hypotheses<br />

The main hypothesis was: “Customer financial indexes are notable in bank experts’ decision making.”<br />

The secondary hypothesis was: “How effective are the loan applicant’s character, capacity, capital, condition and collateral in bank experts’ decision making?”<br />

2.2. Method of scoring indexes<br />

The Friedman test was used for credit scoring, but it does not account for the weights of the indexes, which can be derived from the number of questions. To solve this problem, the following procedure was used:<br />

- First, experts score the questions using the scale described above.<br />

- For grading the indexes, the mean of each question’s scores is estimated, and then the mean of each index’s scores is calculated.<br />

- Each question is considered to carry 10 points, so the number of questions for each index is multiplied by 10 and the result is divided by the total number of questions (26); the result is the index’s coefficient.<br />

- In the next step, the mean of each index is multiplied by its coefficient.<br />

- Finally, this number is divided by the sum of mean × coefficient over all indexes and multiplied by 100 to obtain the score of each index, so that the scores sum to 100.<br />

3. Tests and results<br />

To evaluate the reliability of the questionnaire, Cronbach’s alpha was computed using SPSS; the result was higher than 0.7, which supports the reliability of the answers. Before turning to the results of the different tests, it should be mentioned that all of the tests were conducted in SPSS.<br />
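For readers without SPSS, Cronbach’s alpha is straightforward to compute directly; a minimal sketch on simulated (hypothetical) Likert responses, where one shared trait drives all items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x questions matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

# hypothetical data: 70 respondents x 26 questions, mapped onto a bounded
# 1-9 scale (the study's actual responses are not reproduced here)
rng = np.random.default_rng(1)
trait = rng.normal(0.0, 1.0, (70, 1))
scores = np.clip(np.round(5 + 2 * (trait + rng.normal(0.0, 0.8, (70, 26)))), 1, 9)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

Because the simulated items are strongly correlated, the computed alpha comfortably exceeds the 0.7 reliability threshold used in the study.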

3.1. BOXPLOT<br />

This graph was used to diagnose outliers and to check the symmetry of the data distribution. The BOXPLOT of the data can be seen below:<br />



Figure1-BOXPLOT of data<br />

The graph clearly illustrates left skewness in character, condition and collateral, and left skewness in capacity is also apparent. The data distribution for capital is symmetric. An outlier can be seen in collateral.<br />

3.2. Histogram method<br />

Histograms of the indexes were extracted to check normality. The histogram of the character data is shown in Figure 2 as an example.<br />


Figure2 - Histogram of character data<br />

Figure 2 and the other histograms show that none of the populations is normally distributed.<br />

3.3. QQPLOT<br />

QQPLOT was used to study normality. The QQPLOT graph for character is shown in Figure 3 as an example:<br />



Figure3-QQPLOT for character data<br />

Figure 3 and the other QQPLOT graphs show that none of the populations is normal.<br />

3.4. Kolmogorov-Smirnov Test<br />

The Kolmogorov-Smirnov test was also applied to study normality; the results are below:<br />

Character Capacity Capital condition collateral<br />

N 576 560 140 280 140<br />

Normal Parameters a<br />

Mean 6.8889 6.2643 6.9429 5.8143 8.2857<br />

Std. Deviation 2.29665 2.08276 1.69541 2.07247 1.22495<br />

Most Extreme Differences Absolute .255 .220 .228 .216 .420<br />

Positive .179 .146 .187 .153 .280<br />

Negative -.255 -.220 -.228 -.216 -.420<br />

Kolmogorov-Smirnov Z 6.121 5.211 2.695 3.621 4.971<br />

Asymp. Sig. (2-tailed) .000 .000 .000 .000 .000<br />

a. Test distribution is Normal.<br />

Table1-results of One-Sample Kolmogorov-Smirnov Test<br />

The results of the histogram, QQPLOT and Kolmogorov-Smirnov analyses show that the populations are not normal, so the T-test and Z-test cannot be used.<br />
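The one-sample Kolmogorov-Smirnov check can likewise be reproduced outside SPSS; a sketch on hypothetical discrete Likert ratings (the category weights below are invented), with the normal parameters estimated from the sample as in the table above — discrete 1-9 data fail the normality test decisively:

```python
import numpy as np
from scipy.stats import kstest

# hypothetical ratings on the discrete 1,3,5,7,9 scale (weights are made up);
# parameters are estimated from the sample, as in SPSS's one-sample K-S test
rng = np.random.default_rng(3)
ratings = rng.choice([1, 3, 5, 7, 9], size=576, p=[0.05, 0.10, 0.20, 0.35, 0.30])
stat, p = kstest(ratings, "norm", args=(ratings.mean(), ratings.std(ddof=1)))
print(f"KS statistic = {stat:.3f}, p = {p:.2e}")
```

With 576 observations on only five support points, the empirical CDF departs sharply from any fitted normal, so the p-value is effectively zero, mirroring the .000 significance values in Table 1.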

3.5. Coherency test of samples<br />

The non-parametric Friedman test was used to compare the related samples. The following hypotheses were considered:<br />

H0: there is no difference among the ratings of the 5 indexes.<br />

H1: the ratings of at least one index differ from the others.<br />

N 140<br />

Chi-Square 527.79<br />

Df 4<br />

Asymp. Sig. .000<br />

Table2- results of Friedman Test<br />



Table 2 shows that the indexes rank in this order: collateral, capital, condition, character and capacity (this ranking does not take the coefficients into account). The results also show that, at the significance level of α = 0.05, the ratings of the 5 indexes differ significantly.<br />
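The Friedman test itself is easy to reproduce with scipy; a sketch on hypothetical related ratings (the real questionnaire data are not reproduced here), with one index shifted upward so there is a genuine difference to detect:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# hypothetical ratings: 140 related evaluations x 5 indexes on a 1-9 scale
rng = np.random.default_rng(2)
ratings = rng.integers(1, 10, (140, 5)).astype(float)
ratings[:, 4] += 2.0                       # e.g. collateral rated higher
stat, p = friedmanchisquare(*ratings.T)    # one array per index (treatment)
print(f"chi-square = {stat:.2f}, p = {p:.2e}")
```

As in Table 2, a large chi-square with df = 4 and a tiny p-value leads to rejecting H0 at α = 0.05.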

4. Results of index scoring<br />

The method described above was used to score the indexes; the results, computed as (mean of index × coefficient) / Sum(mean × coefficient) × 100, are shown in Table 3:<br />

Index Score<br />

Character 39.95<br />

Capacity 28.98<br />

Capital 8.03<br />

Condition 13.45<br />

Collateral 9.58<br />

Table3-results of indexes scoring<br />

Table 3 shows that the index scoring yields this order: character, capacity, condition, collateral and capital. As can be seen, from the Iranian bank experts’ point of view, character is the most important of the 5C’s and has the highest value.<br />
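The scoring procedure can be replayed from the published numbers. The sketch below uses the index means from the Kolmogorov-Smirnov table and question counts read from the appendix (10, 8, 2, 4); the count of 2 collateral questions is an assumption, since the appendix lists only one collateral question while the total is stated as 26:

```python
# index means from the K-S table; question counts from the appendix
# (the collateral count of 2 is an assumption, see lead-in)
means = {"Character": 6.8889, "Capacity": 6.2643, "Capital": 6.9429,
         "Condition": 5.8143, "Collateral": 8.2857}
questions = {"Character": 10, "Capacity": 8, "Capital": 2,
             "Condition": 4, "Collateral": 2}

coef = {k: 10 * q / 26 for k, q in questions.items()}   # 10 points per question
weighted = {k: means[k] * coef[k] for k in means}
total = sum(weighted.values())
scores = {k: 100 * w / total for k, w in weighted.items()}
for k, s in scores.items():
    print(f"{k}: {s:.2f}")
```

Under this assumption the computed shares agree with Table 3 to within rounding (character ≈ 39.9, capacity ≈ 29.0, capital ≈ 8.0, condition ≈ 13.5, collateral ≈ 9.6).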

5. Conclusion<br />

According to the final results, Iranian bank experts believe that character is the most important factor among the 5C’s and pay the most attention to it. Character refers to the customer’s tendency to honor his commitments well; it can be said that character means commitment. Furthermore, people’s effort reflects their character in developing a successful project. The ability to repay on time, the loyalty of employees, their educational background, and the duration of staying in a stable place are also among its elements.<br />

Besides, capacity is seen as the second most important factor in Iranian banking. The main factors in decision making are evaluating financial power and profitability, forecasting future activities, and information about cash flow and the methods of obtaining and consuming cash; these are also the basic parameters for viewing capacity.<br />

Capacity, or the surveyed cost ratio, is derived from product cost and capital cost. Increasing costs and decreasing income related to low sales can affect loan repayment and have a negative impact on repayment.<br />

The applicant’s power of loan repayment is highly related to their ability to use resources correctly and to profit from the economic unit’s facilities. Lending should therefore be performed after estimating the real needs of the company, and should be proportionate to the company’s ability to use cash.<br />

Financial analysis techniques can be used when surveying risks and other factors relevant to evaluating a company’s capacity.<br />

Furthermore, among these 5 indexes, customer capital receives a low score. Capital represents the customer’s share in the project, so it is an important indicator of the customer’s commitment to carrying out the project as well as possible.<br />

When a firm uses a bank loan to operate commercially, a higher proportion of the firm’s own capital shifts risk from the bank to the firm, which matters given the often unclear financial position of the firm.<br />



Finally, collateral receives one of the lowest scores. Collateral provides a mechanism to repay a specific debt when it becomes delinquent; at that point, bank experts are assured of repayment. Collateral is thus identified as among the least valued of the 5C’s from the Iranian bank experts’ point of view. As discussed above, all of the items are important in decision making and none can be ignored. Recognizing the importance of these aspects helps banks correct their decision-making methods. It should be noted that ignoring items such as collateral and condition could put bank capital at risk.<br />

6. References<br />

Bryant, K.; "An agricultural loan evaluation expert system", Expert Systems with Applications, 2001, pp. 75-85.<br />

Duchessi, P.; "A knowledge engineered system for commercial loan decisions", Financial Management, 1995, (17-3), pp. 57-65.<br />

Rosman, Andrea, and Bedard, Jean; "Lenders' Decision Strategies and Loan Structure Decisions", Journal of Business Research, 1999, pp. 83-94.<br />

Yardakhl, M., and Tansel, Y.; "AHP approach in the credit evaluation of manufacturing firms in Turkey", Int. J. of Production Economics, 2004 (88), pp. 269-289.<br />

Chan, F.T.S., and Mak, K.L.; "An Integrated Approach to Investment Appraisal for Advanced Manufacturing Technology", Human Factors and Ergonomics in Manufacturing, 2000 (19).<br />

Reitan, Bjornar; "Criteria Used by Private Bank Officers to Evaluate New Ventures: An Analysis of Gaps and Shortcomings", 1998. http://www.sbaer.uca.edu/Research/1998/ICSB/n015.htm<br />

Tim Hill of COCC; "The 5C's: getting money from a bank".<br />

Accounting organization; "Iranian accounting based theories", publishing committee of standards of Iran accounting, Tehran, 2007.<br />

"The 5C's of credit - small business resource", SBA (US Small Business Administration), North Carolina.<br />

Edward Poll, J.D., M.B.A., CMC; "Understanding the Four C's", Lawyers and Bank Loans, Law Practice Today, September 2003.<br />

7. APPENDIXES<br />

Questionnaire:<br />

How much does each of these factors affect your decision to lend?<br />

For scoring the factors, consider that:<br />

1 = this item is the least important, from the bank experts’ point of view, for lending.<br />

3 = this item is slightly important, from the bank experts’ point of view, for lending.<br />

5 = this item is neutral, from the bank experts’ point of view, for lending.<br />

7 = this item is important, from the bank experts’ point of view, for lending.<br />

9 = this item is the most important, from the bank experts’ point of view, for lending.<br />

Character questions:<br />

Customer education: …………..<br />

Company activity in related industry: …………..<br />

Customer credit in industry: …………..<br />

Customer background in past loans and his repayment method in the past: …………..<br />

Commercial fame: …………..<br />

Loan repayment: …………..<br />

Loan application times: …………..<br />

Customer credit background in other banks: …………..<br />

Company management constancy: …………..<br />

Profitability: …………..<br />

Capacity questions:<br />

Company financial ability in repaying: …………..<br />



Company financial statements: …………..<br />

Fixed asset value: …………..<br />

Liquidity asset and cash flow: …………..<br />

Obtaining financial resource method: …………..<br />

Product variety: …………..<br />

Competitors: …………..<br />

Target market: …………..<br />

Capital questions:<br />

Account based asset value: …………..<br />

Company management ability and his background: …………..<br />

Condition questions<br />

Encountering national economical fluctuation: …………..<br />

Encountering international economical fluctuation: …………..<br />

Economic and politic society condition: …………..<br />

Place of related industry in society: …………..<br />

Collateral questions:<br />

Amount of customer collateral: …………..<br />



TWO-STAGE DEA APPROACH TO EVALUATE THE EFFICIENCY OF BANK BRANCHES<br />

Akram Bodaghi & Heidar Mostakhdemin Hosseini<br />

Mazandaran University of Science and Technolog & Tosse-Eh Farda Bank, Iran.<br />

Email: akram.bodaghi@gmail.com, m_hosseini@cid.ir, www.cid.ir<br />

Abstract. In emerging market economies, performance analyses in the service industries, especially the banking sector, attract more and more attention. This paper presents an evaluation of 50 pilot branches of an Iranian private bank, Tosse-Eh Farda Bank (previously Tosse-Eh Credit Institute), using Data Envelopment Analysis (DEA). Additionally, we implement an extension to this model with the aim of ranking the efficient Decision Making Units (DMUs) by applying Ordinary Least Squares (OLS). Special emphasis is placed on how to present the DEA results to management in order to provide more guidance on what to manage and how to accomplish the changes. Finally, the potential management uses of the DEA results are presented.<br />

Keywords: Data Envelopment Analysis (DEA), Linear programming, Banking, Efficiency, Ordinary Least Squares, Ranking<br />

JEL Classification: C52, C82, D61, G21<br />

1 Introduction<br />

In retrospect, banks have focused on various profitability measures to evaluate their performance, usually selecting multiple ratios to capture different aspects of operations. However, ratio analysis provides relatively little information when considering the effects of economies of scale, the identification of benchmarking policies, and the estimation of overall firm performance. As an alternative to traditional bank management tools, frontier efficiency analyses allow management to objectively identify best practices in complex operational environments. Compared to other approaches, Data Envelopment Analysis (DEA) is a better way to organize and analyze data, since it allows efficiency to change over time and requires no prior assumption on the specification of the best-practice frontier. In addition, it permits the inclusion of random errors if necessary.<br />

Since the introduction of DEA technology, a considerable number of researchers have applied it in financial service<br />

industry. Cook et al. (2000) investigated the use of quantitative variables in bank branch evaluation using DEA.<br />

Paradi and Schaffnit (2004) evaluated the performance of the commercial branches of a large Canadian bank. They<br />

introduce non-discretionary factors to reflect specific aspects of the environment a branch is operating in. Asmild et<br />

al. (2004) evaluate the performance of Canadian banking industry over time. Bala and Cook (2003) incorporate<br />

expert knowledge within the DEA framework.<br />

They first apply a discriminate or classification tool to quantify the functional relation that best captures the<br />

expert's mental model for performance. The outcome of this first phase is an orientation of variables to aid in the<br />

definition of inputs and outputs. The resulting orientation then defines the DEA model that makes up the second<br />

phase of the model. Camanho and Dyson (2005) investigated the bank branch performance under price uncertainty.<br />

Halkos and Salamouris (2004) measured Greek bank performance using DEA. Isik and Hassan (2003) utilize a<br />

DEA-type Malmquist Total Factor Productivity Change Index to examine productivity growth, efficiency change,<br />

and technical progress in Turkish commercial banks during the deregulation of financial markets in Turkey.<br />

Additionally, Guan and Dipinder (2005), Athanassopoulos and Giokas (2000), Devaney and Weber (2004), Pille and<br />

Paradi (2002), Mercan et al. (2003) and Neal (2004) studied the use of DEA in financial institutions, to mention a<br />

few.<br />

This paper presents an evaluation of 50 pilot branches of the first Credit Institute in Iran using DEA. The rest of<br />

the paper is organized as follows. Section 2 gives a brief review of DEA. Section 3 presents the models and<br />

methodology utilized in this paper, followed by the DEA results and their potential management uses.<br />

Finally, the conclusions are given in Section 4.<br />

155


2 The Principles of Data Envelopment Analysis<br />

Production process can be defined as a process that can turn a set of resources into desirable outcomes by production<br />

units. During this process, efficiency is used to measure how well a production unit is performing in utilizing its<br />

resources to generate the derived outcomes. Each of the various DEA models seeks to determine which of the n<br />

decision making units (DMUs) define an envelopment surface that represents the best practice, referred to as the<br />

empirical production function or the efficient frontier. Units that lie on the surface are deemed efficient in DEA<br />

while those units that do not, are termed inefficient. DEA provides a comprehensive analysis of relative efficiencies<br />

for multiple input-multiple output situations by evaluating each DMU and measuring its performance relative to an<br />

envelopment surface composed of other DMUs. Those DMUs forming the efficiency reference set are known as the<br />

peer group for the inefficient units. As the inefficient units are projected onto the envelopment surface, the efficient<br />

units closest to the projection and whose linear combination comprises this virtual unit form the peer group for that<br />

particular DMU. The targets defined by the efficient projections give an indication of how this DMU can improve to<br />

be efficient. Consider n DMUs to be evaluated: DMU_j (j = 1, 2, …, n) consumes amounts X_j = {x_ij} of inputs (i = 1,<br />

2, …, m) and produces amounts Y_j = {y_rj} of outputs (r = 1, …, s). The efficiency of a particular DMU₀ can be<br />

obtained from the following linear program (the input-oriented BCC model of Banker, Charnes and Cooper, 1984).<br />

(1)<br />

min  θ<br />

s.t.  Xλ + s⁻ = θX₀<br />

Yλ − s⁺ = Y₀<br />

eᵀλ = 1<br />

λ ≥ 0, s⁻ ≥ 0, s⁺ ≥ 0<br />

Performing a DEA analysis actually requires the solution of n linear programming problems of the above form,<br />

one for each DMU. The optimal variable θ is the proportional reduction to be applied to all inputs of DMU0 to move<br />

it onto the frontier. A DMU is termed efficient if and only if the optimal value θ* is equal to 1 and all the slack<br />

variables are zero. This model allows variable returns to scale. The dual program of the above formulation is<br />

illustrated by:<br />

(2)<br />

max  uᵀY₀ + u₀<br />

s.t.  vᵀX₀ = 1<br />

uᵀY − vᵀX + u₀eᵀ ≤ 0<br />

u ≥ 0, v ≥ 0, u₀ free<br />

If the convexity constraint (eᵀλ = 1) in (1) and the variable u₀ in (2) are removed, the feasible region is<br />

enlarged, which results in a reduction in the number of efficient DMUs, and all DMUs are assumed to operate at constant<br />

returns to scale. The resulting model is referred to as the CCR model. The reader is advised to consult the textbook<br />

by Cooper, Seiford and Tone (2000) for a comprehensive treatment of DEA theory and application methodology.<br />
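Model (1) can be solved in practice with any LP solver, one program per DMU. The sketch below is a minimal illustration, not the authors' implementation: it sets up the input-oriented BCC envelopment program with SciPy's `linprog` on a hypothetical two-DMU, one-input, one-output example. Slack variables are left implicit here, so the returned θ* alone does not distinguish weakly efficient units.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, j0):
    """Input-oriented BCC efficiency score theta* for DMU j0.

    X: (m, n) input matrix, Y: (s, n) output matrix for n DMUs.
    Decision variables are [theta, lambda_1, ..., lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                      # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]         # X @ lam <= theta * x0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y               # Y @ lam >= y0
    b_ub[m:] = -Y[:, j0]
    A_eq = np.zeros((1, n + 1))
    A_eq[0, 1:] = 1.0               # convexity constraint (VRS): sum(lam) = 1
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.fun

# Hypothetical data: DMU 0 (x=1, y=1) lies on the frontier; DMU 1 (x=2, y=1) does not.
X = np.array([[1.0, 2.0]])
Y = np.array([[1.0, 1.0]])
theta0 = bcc_input_efficiency(X, Y, 0)   # -> 1.0
theta1 = bcc_input_efficiency(X, Y, 1)   # -> 0.5
```

A full study such as the one in this paper would loop this over all 50 branches, one LP each.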

3 Specifications and Data<br />

A number of different approaches can be used for modeling banking processes, each of which captures a<br />

different aspect of efficiency. The two most important approaches are the production approach and the<br />

financial intermediation approach. Under the production approach, banks are viewed as institutions making use of<br />

various labor and capital resources to provide different products and services to customers. Thus, the resources<br />

being consumed such as labor and operating cost are deemed as inputs while the products and the services such as<br />

loans and deposits are considered outputs of the banks. This model measures the cost efficiency of the banks. Under<br />

the financial intermediation approach, banks are viewed as financial intermediaries collecting deposits and other<br />

loanable funds from depositors and lending them out as loans or other assets to others for profit. The different forms of<br />

funds that can be borrowed and the cost associated with performing the process of intermediation are considered as<br />

inputs. The forms in which the funds can be lent are outputs of the model. This model measures the economic<br />

viability of the banks. There were two inputs (official cost and wage cost) and six outputs (different kinds of<br />

deposits, loans and other facilities, delaying claims, operational profit, other services and branches’<br />

superficial quality) in the DEA model. Input orientation (the LP is oriented to minimize inputs) was selected for the<br />

DEA models in this research, because management was more interested in minimizing the consumption of<br />

inputs subject to attaining the desired output levels. The BCC model is utilized so as to account for the size effect.<br />

3.1 The Proposed Method<br />

In this analysis, 50 branches of a private bank in Tehran, Iran are evaluated. The degree of correlation between<br />

inputs and outputs is an important issue that has great impact on the robustness of the DEA model. Thus, a<br />

correlation analysis is imperative to establish appropriate inputs and outputs. On the one hand, if very high<br />

correlations are found between an input variable and any other input variable (or between an output variable and any<br />

of the other output variables), this input or output variable may be thought of as a proxy of the other variables.<br />

Therefore, this input (or output) could be excluded from the model. On the other hand, if an input variable has very<br />

low correlation with all the output variables (or an output variable has very low correlation with all the input<br />

variables), it may indicate that this variable does not fit the model. Correlation analyses were done for each pair of<br />

variables and the following table presents the details.<br />

                      Wage   Official            Loan &      Delaying   Operational  Other     Superficial<br />
                      Cost   Cost      Deposits  Facilities  Claims     Profit       Services  Quality<br />
Wage Cost             1.00   0.81      0.79      0.70        0.80       0.44         0.86      0.32<br />
Official Cost         0.81   1.00      0.51      0.53        0.48       0.20         0.58      0.57<br />
Deposits              0.79   0.51      1.00      0.51        0.71       0.80         0.94      0.31<br />
Loan & Facilities     0.70   0.53      0.51      1.00        0.86       0.08         0.58      0.20<br />
Delaying Claims       0.80   0.48      0.71      0.86        1.00       0.23         0.76      0.28<br />
Operational Profit    0.44   0.20      0.80      0.08        0.23       1.00         0.69      0.25<br />
Other Services        0.86   0.58      0.94      0.58        0.76       0.69         1.00      0.26<br />
Superficial Quality   0.32   0.57      0.31      0.20        0.28       0.25         0.26      1.00<br />
Table 1: Correlation Coefficients between the Inputs and Outputs<br />

We did not find any evidence of very high correlation between any one input variable and any other input variable<br />

(nor between output variables), or of any one input variable having very low correlation with all of the output<br />

variables, in the table above. This is a reasonable validation of the DEA model. Otherwise, a sensitivity analysis<br />

of the impact of including and excluding different variables on the efficiency scores should be performed. The<br />

input-oriented BCC model is run and Table 2 summarizes the results for the model.<br />
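The correlation screening just described can be scripted directly. The sketch below uses synthetic stand-in data (the bank's branch records are not reproduced in the paper), and the 0.95 / 0.10 cut-offs are illustrative assumptions rather than thresholds from the study:

```python
import numpy as np

# Hypothetical stand-in data: rows = 50 branches, columns = the 8 variables
# (2 inputs followed by 6 outputs); real values would come from bank records.
rng = np.random.default_rng(0)
data = rng.random((50, 8))
labels = ["Wage Cost", "Official Cost", "Deposits", "Loan & Facilities",
          "Delaying Claims", "Operational Profit", "Other Services",
          "Superficial Quality"]

corr = np.corrcoef(data, rowvar=False)   # 8 x 8 Pearson correlation matrix

# Screening rules from the text: near-duplicate variables (candidate proxies)
# and inputs that barely correlate with any output (candidate misfits).
redundant = [(labels[i], labels[j])
             for i in range(8) for j in range(i + 1, 8)
             if abs(corr[i, j]) > 0.95]
weak_inputs = [labels[i] for i in range(2)        # columns 0-1 are the inputs
               if all(abs(corr[i, j]) < 0.10 for j in range(2, 8))]
```

With the paper's actual figures from Table 1, both lists come out empty, which is exactly the validation argued for above.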

157


Topic                                       Result<br />
Average Score                               0.86<br />
Standard Deviation                          0.19<br />
Maximum Efficiency Score                    1.00<br />
Minimum Efficiency Score                    0.23<br />
Number of DMUs                              50<br />
Number of Efficient DMUs                    23<br />
Number of Efficient DMUs exhibiting IRS     3<br />
Number of Efficient DMUs exhibiting CRS     13<br />
Number of Efficient DMUs exhibiting DRS     7<br />
Table 2: DEA Results (BCC model)<br />

3.2 The Efficient DMUs Ranking<br />

We employ the Data Envelopment Analysis (DEA) BCC model to classify decision-making units (DMUs) into<br />

efficient and inefficient ones based upon the multiple input and output performance indices. Afterwards, we assume<br />

that there is a centralized decision maker (DM) who ‘owns’ or ‘supervises’ all the DMUs. The DM has an interest in<br />

discriminating among the efficient DMUs (eDMUs). We present a new method that determines the most compromising set<br />

of weights for the indices. With this most compromising set of weights, the total of the new efficiency scores of the<br />

eDMUs has the least total gap to the compromised data. The eDMUs whose efficiency score equals one are located on<br />

the data; the other eDMUs are located either above or below it. The approach is analogous to the treatment of residuals<br />

in ordinary least squares (OLS) regression analysis. In this way, all the branches can be ranked.<br />
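The paper does not state the common-weights ranking formally, so the following is only one plausible reading of the OLS analogy: fit a single linear relation across the efficient branches by least squares and rank them by signed residual (how far each sits above or below the fitted relation). The output aggregation and function names here are illustrative assumptions.

```python
import numpy as np

def rank_efficient_dmus(inputs, outputs):
    """Rank efficient DMUs by signed residual from a least-squares fit
    of aggregate output on the inputs (plus an intercept)."""
    X = np.hstack([inputs, np.ones((inputs.shape[0], 1))])
    y = outputs.sum(axis=1)          # illustrative aggregation of outputs
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return np.argsort(-residuals)    # largest positive residual ranked first

# Toy example: three efficient branches with one input and one output each;
# the middle branch sits below the fitted line and is ranked last.
ranking = rank_efficient_dmus(np.array([[1.0], [2.0], [3.0]]),
                              np.array([[1.0], [2.0], [4.0]]))
```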

4 Conclusions<br />

This paper uses DEA to assess the branch performance in a private bank. The branches operate fairly efficiently on<br />

the whole although there is still room for improvement. Special emphasis was placed on how to present the DEA<br />

results to management so as to provide more guidance to them on what to manage and how to accomplish the<br />

changes. Finally, recommendations on management’s use of the DEA results were given.<br />

5 Acknowledgements<br />

I owe special thanks to Tosse-Eh Farda Bank for providing the relevant data set and also Professor Heidar<br />

Mostakhdemin Hosseini of Raja University for his valuable comments.<br />

158<br />



6 References<br />

Asmild Mette, Paradi J. C., Aggarwall Vanita, and Schaffnit Claire (2004), “Combining DEA Window Analysis<br />

with the Malmquist Index Approach in a Study of the Canadian Banking Industry”, Journal of Productivity<br />

Analysis 21, No. 1, 67-89.<br />

Athanassopoulos, A.D. and Giokas, D. (2000), “The Use of Data Envelopment Analysis in Banking Institutions:<br />

Evidence from the Commercial Bank of Greece”, Interfaces, Vol. 30, No. 2, 81-95.<br />

Bala, Kamel and Cook, Wade D. ( 2003), “Performance Measurement with Classification Information: an Enhanced<br />

Additive DEA Model”, Omega 31, No. 6, pp439-450.<br />

Banker, R., D., Charnes, A., and W. W Cooper (1984), "Models for Estimating Technical and Scale Efficiencies in<br />

DEA", Management Science 30(9): pp.1078-1092.<br />

Casu, B., Girardone, C., and Molyneux, P. (2004), “Productivity Change in European Banking: a Comparison of<br />

Parametric and Non-Parametric Approaches”, Journal of Banking and Finance 28, No. 10, 2521-2540.<br />

Camanho A.S., Dyson and R.G. (2005), “Cost Efficiency Measurement with Price Uncertainty: a DEA Application<br />

to Bank Branch Assessments”, European Journal of Operational Research 161, No. 3, 432-446.<br />

Cook, W.D., Hababou, M. and Tuenter, H.J. (2000), “Multicomponent Efficiency Measurement and Shared Inputs in<br />

Data Envelopment Analysis: an Application to Sales and Service Performance in Bank Branches”, Journal of<br />

Productivity Analysis 14, 209-224.<br />

Cooper, W.W., L.M. Seiford and K. Tone (2000), “Data Envelopment Analysis: A Comprehensive Text with<br />

Models, Applications, References”, Kluwer Academic Publishers, Boston.<br />

Guan, H.L. and Dipinder, S.R. (2005), “Competition, Liberalization and Efficiency: Evidence from a Two-Stage<br />

Banking Model on Banks in Hong Kong and Singapore”, Managerial Finance 31, No. 1, 52-77.<br />

Halkos, G.E. and Salamouris, D.S. (2004), “Efficiency Measurement of the Greek Commercial Banks with the Use<br />

of Financial Ratios: a Data Envelopment Analysis Approach”, Management Accounting Research 15, No. 2,<br />

201-224.<br />

Isik, Ihsan and Kabir Hassan, M. (2003), “Financial Deregulation and Total Factor Productivity Change: An<br />

Empirical Study of Turkish Commercial Banks”, Journal of Banking and Finance 27, No. 8, 1455-1485.<br />

Mercan, M., Reisman A., Yolalan, R., and Emel, A.B. (2003), “The effect of Scale and Mode of Ownership on the<br />

Financial Performance of the Turkish Banking Sector: Results of a DEA-Based Analysis”, Socio-economic<br />

Planning Sciences 37, No. 3, 185-202.<br />

Neal, Penny (2004), “X-Efficiency and Productivity Change in Australian Banking”, Australian Economic Papers,<br />

Vol 43, Issue 2, 174-191.<br />

Paradi, J.C. and Schaffnit Claire (2004), “Commercial Branch Performance Evaluation and Results Communication<br />

in a Canadian Bank - a DEA Approach”, European Journal of Operational Research 156, No. 3, 719-735.<br />

Pille. P., and Paradi, J.C. (2002), “Financial Performance Analysis of Ontario (Canada) Credit Unions: An<br />

Application of DEA in the Regulatory Environment”, European Journal of Operational Research 139, No. 2,<br />

339-350.<br />

159


MODELING FUNCTIONAL INDICATORS FOR CUSTOMER PERFORMANCE IN REPAYING BANK<br />

LOAN (CASE STUDY: IRANIAN BANKS)<br />

Fariba Seyed Jafar Rangraz, Amirkabir University of Technology<br />

Jafar Pashami, Sharif University of Technology<br />

Email: RANGRAZFARIBa6384@gmail.com<br />

Abstract. Nowadays, decision making has become an important issue in many subjects and areas, including investment and banking.<br />

This is due to unstable economic conditions and the unknown factors that determine people’s success. Developing countries face<br />

limited capital. Hence, little attention is paid to evaluating bank customers on the basis of scientific methods in order to develop a<br />

framework for the project evaluation process. Therefore, there are many unfinished and untouched research projects in those countries,<br />

including Iran. This paper aims to evaluate the money lending process for Iranian banks’ customers. The 5C’s model is used to measure<br />

the relationship between each element of the model, taking into consideration the factors leading to customer satisfaction. A quantitative<br />

method is applied to measure the findings. An MLP using cross-validation and an SVM are used in order to estimate the percentage of a<br />

customer’s loan request that banks should approve and also to predict the customer’s behavior in repaying the loan. These methods turn<br />

out to be more successful in estimating the first output (the acceptable percentage of the customer’s loan request), but to obtain better<br />

results for the second output (predicting the customer’s performance in repaying), the input features must be changed.<br />

Key words: Banking, lending, repaying, 5C’s, MLP, SVM<br />

1. Introduction<br />

Banks use techniques, collectively referred to as computation techniques, to decrease risk and lower the probability of<br />

non-repayment. However, the unstable economy and external environmental factors force banks to review their methods of<br />

decision making.<br />

K. Bryant has presented a model for evaluating agricultural loans (Bryant, K., 2001). He used Duchessi’s<br />

research to design his model. According to Bryant, it is necessary for the lender to know five factors when consumer<br />

credit is being evaluated (Duchessi, P., 1995). These factors are: credit, capital, capacity, collateral and character.<br />

Bryant introduced an expert system for decision making (ALEES). In this system, qualitative factors such as the skill,<br />

experience and intelligence of the loan expert are key, while quantitative factors are mixed in.<br />

Rosman and Bedard have studied the different strategies that lenders use for approving borrowers. They<br />

suggest that the lending process is structured in two parts, financial and nonfinancial, with several features in<br />

each part (Rosman, Andrea, and Bedard, Jean, 1999).<br />

Tansel and Yardakhl presented a model for evaluating decisions. This model is based on determining the<br />

credit of manufacturing firms that want to borrow from the Turkish banks (Yardakhl, M. , and Tansel, Y., 2004). In<br />

this model, all quality and quantity indexes related to the profitability of a company have been recognized. The<br />

obtained results can give a special score to the company using the AHP technique.<br />

Chan and Mak presented a method showing the benefits of selecting an improved manufacturing technique<br />

consisting of the following three approaches (Chan, F.T.S., and Mak, K.L. , 2000):<br />

• Strategic approach: achieving common goals<br />

• Economic approach: economic benefits<br />

• Analytical approach: economic and non-economic benefits and risks<br />

Reitan introduced efficient factors when evaluating loans. He estimated the ability of experts in determining the<br />

value and measuring factors of the project. He compared them based on the importance of each factor. He also<br />

concluded why certain evaluating processes are incorrect. He categorized 42 indexes into 6 groups. These are the<br />

Manager’s character and experience, services offered, market conditions, different pertinent aspects of the<br />

organization and financial matters (Reitan, Bjornar., 1998).<br />

160


In this paper, the factors affecting the lending decision making process have been studied. Different results due to<br />

various situations, features and abilities of the borrower have been obtained.<br />

This information is categorized into five groups. The effective factors in bank decision making are named the 5C’s.<br />

They are (Tim Hill of COCC):<br />

• Character<br />

• Capacity<br />

• Condition<br />

• Capital<br />

• Collateral<br />

1.1. Character<br />

Character is the customer’s impression on the lender. The lender decides whether the customer is reliable enough to repay.<br />

Therefore, the customer’s education and experience in that project would be surveyed. Also, an employee’s quality<br />

of performance and experience should be considered (Tim Hill of COCC).<br />

1.2. Capacity<br />

The capacity of the loan applicant refers to their repayment ability, and it is evaluated using expected cash flow.<br />

For customers requesting short-term loans, bank experts pay attention to the method of payment, such<br />

as cash. Other factors considered are the customer’s assets which can be converted to cash immediately. Future<br />

cash flow and the company’s potential in honoring short-term commitments are considered too. In contrast, for clients<br />

requesting long-term loans, lenders usually focus on long-term assets. Other factors considered are the difference<br />

between the assets and debts, ongoing activities, profitability and the company’s potential (Accounting organization of<br />

Iran, 2007).<br />

1.3. Capital<br />

Capital is equal to the net worth or net assets; it also reflects the company’s success and value. This criterion can be<br />

misleading given the variability of asset worth, since official accounts may not show the market worth of assets.<br />

Inflation and rising price levels are among the reasons for this, and sometimes the book value of assets is less than<br />

their market worth. Still, valuing a firm by the official worth of its assets is not critical for the lender. Capital is the<br />

money which a client personally invests in a project, and it represents the client’s share of the loss if the firm goes<br />

bankrupt (SBA, North Carolina).<br />

1.4. Collateral<br />

Collateral is studied from a lending aspect and is one of the basic factors in the lending decision making process.<br />

The other notable role of collateral is that it is one of the effective elements in evaluating the borrower and determining<br />

his credit condition. No matter how careful and cautious a bank’s employees are in lending, there will sometimes be<br />

problems in repayment. In this situation, the alternative solution is collateral. The bank must<br />

decide what type of collateral is needed to recoup the loan. Property, a house or land, equipment or personal<br />

assurance may be required (Edward Poll,J.D.,M.B.A.,CMC,2003).<br />

2. Research method<br />

For evaluating the indexes, a questionnaire consisting of 26 questions was designed and distributed among 100 bank<br />

experts. The questions were categorized into five groups corresponding to the 5C’s indexes. In addition, some of the<br />

Iranian banks’ experts were interviewed to support the information collected from the questionnaires.<br />

The Likert scale was used to measure attitudes, preferences, and subjective reactions of respondents.<br />

Statistical sample size in this research consists of Iranian banks’ experts. Selected banks were some public and<br />

private banks. This selection helped to collect more comprehensive data sets.<br />

An MLP using cross-validation and an SVM were used in order to estimate the percentage of the customer’s loan request<br />

that banks should approve and to predict the customer’s behavior in repaying the loan. An MLP with one hidden layer, 3<br />

neurons in the hidden layer and one neuron in the output layer was used. The input vectors were then normalized.<br />

161


Twenty-six features were defined based on the 5C’s, so the number of weights is equal to (26*3)+3 = 81. Roughly ten<br />

times as much data should be available in order to train and test the network well. As there were only 100 questionnaires<br />

as input data, the cross-validation method was used in order to overcome the shortage of data. This method is used when<br />

enough data is not available and there is no possibility of allocating separate data for training and testing. The k-fold<br />

method was used: each time, 10 data points were used for testing and 80 for training. This procedure was carried out over<br />

90 data points, and the 10 last data points were not given to the network; they were used at the end for checking the answers.<br />
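The data-splitting scheme described above (10 records withheld entirely, then rotating 10-test / 80-train folds over the remaining 90) can be sketched as index bookkeeping; the MLP itself is omitted:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# As in the text: hold out the last 10 of 100 records for a final check,
# then rotate 9 folds of 10 test / 80 train over the remaining 90.
held_out = np.arange(90, 100)
splits = list(kfold_indices(90, 9))
```

Each `(train, test)` pair would train the network once, and the held-out 10 records serve as the final sanity check.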

2.1. Research hypotheses<br />

The main hypotheses of this research were:<br />

• Customer financial indexes are notable in bank experts’ decision making.<br />

• The loan applicant’s character, capacity, capital, condition and collateral are effective in bank experts’ decision<br />

making.<br />

3. Research result<br />

Suppose that Y1 is the percentage of the customer’s requested loan amount that the bank accepts, and Y2 is the<br />

category that the customer belongs to. The Likert scale was used for defining Y2. In this paper, in order to show more<br />

variance among the groups, the values 1, 3, 5, 7, 9 were applied instead of 1, 2, 3, 4, 5, so the accuracy became higher.<br />

Y2 was defined as follows:<br />

Y2 = 1 means that the customer performs very badly in repaying.<br />

Y2 = 3 means that the customer performs badly in repaying.<br />

Y2 = 5 means that the customer performs neutrally in repaying.<br />

Y2 = 7 means that the customer performs well in repaying.<br />

Y2 = 9 means that the customer performs very well in repaying.<br />

The graph of the training error for the first output (acceptable percentage of the customer’s loan request) using the MLP<br />

is shown in Figure 1.<br />

Figure 1: The graph of the training error<br />

The testing error for the first output was e_v1 = 0.002.<br />

For the second output, the same network was used and the error was about 0.01. From this notable error, it can be<br />

concluded that the features used for predicting the amount of loan the bank accepts are not appropriate for finding<br />

the repayment trend. One might expect that when a lender grants a loan, he should be sure of the repayment trend, so<br />

the two outputs should not be independent; but in this research the data were incoherent, and this leads to error. An<br />

SVM network was also used for classification, but its error was about the same as before or only slightly lower. Then,<br />

in order to find which features are more important and which combination could decrease the error, the network was<br />

run using combinations of 1 to 7 of the 26 features, but the error did not improve for any combination.<br />

As running the program was very time-consuming, it was applied only to some of the features; other combinations of<br />

features may reduce the error and lead to the best selection of features, or even suggest defining new features.<br />

Then an unsupervised network, K-means, was used for classification in order to show that these features lead to<br />

labels different from the second-output data. This result was another confirmation of the previous discussion.<br />

4. Conclusion<br />

Using the MLP network, the input features can estimate the first output (the percentage of the customer’s requested loan<br />

amount that banks accept) well, and the training error equals the test error, 0.002. Using the SVM and the MLP shows<br />

that, for the second output (predicting the customer’s performance in repaying), these features give a test error of about<br />

0.01, which is notable; in order to decrease the error, the features should be changed, as the current features are not<br />

reliable for predicting the repayment trend.<br />

5. References<br />

Bryant, k.;"An agricultural loan evaluation expert system", Expert system with application, 2001, pp75-85.<br />

Duchessi, P.;"A knowledge engineered system for commercial loan decisions", Financial management, 1995, (17-3),<br />

pp 57-65.<br />

Rosman Andrea., and Bedard, Jean."Lenders Decision Strategies and loan structure decisions", Journal of Business<br />

Research, 1999, pp.83-94.<br />

Yardakhl, M. , and Tansel, Y. ;"AHP approach in the credit evaluation of manufacturing firms in Turkey'', Int. J. of<br />

production Economics, 2004(88), pp. 269-289.<br />

Chan, F.T.S., and Mak, K.L. ;"An Integrated Approach to Investment Appraisal for Advanced manufacturing<br />

technology". ,Human factors and Ergonomics in manufacturing,2000(19)<br />

Reitan, Bjornar; "Criteria Used by Private Bank Officers to Evaluate New Ventures: An Analysis of Gaps and<br />

Shortcomings" http://www.sbaer.uca.edu/Research/1998/ICSB/n015.htm<br />

Tim Hill of COCC, "The 5C's getting money from a bank"<br />

Accounting organization, "Iranian accounting based theories", publishing committee of standards of Iran accounting,<br />

Tehran 2007<br />

"The 5C's of credit-small business resource", SBA(US Small Business administration)-North Catalina<br />

Edward Poll,J.D.,M.B.A.,CMC, " Understanding the four C's", Lawyers and bank loans ,law practice TODAY,<br />

september2003<br />

6. APPENDIXES<br />

Questionnaire:<br />

How much does each of these factors affect your decision to lend?<br />

For scoring the factors, consider that:<br />

1 = this item is the least important item, from the bank experts’ point of view, for lending.<br />

3 = this item is a slightly important item, from the bank experts’ point of view, for lending.<br />

5 = this item is a neutral item, from the bank experts’ point of view, for lending.<br />

7 = this item is an important item, from the bank experts’ point of view, for lending.<br />

9 = this item is the most important item, from the bank experts’ point of view, for lending.<br />

Character questions:<br />

Customer education: …………..<br />

Company activity in related industry: …………..<br />

Customer credit in industry: …………..<br />

Customer background in past loans and his repayment method in the past: …………..<br />

163


Commercial fame: …………..<br />

Loan repayment: …………..<br />

Loan application times: …………..<br />

Customer credit background in other banks: …………..<br />

Company management constancy: …………..<br />

Profitability: …………..<br />

Capacity questions:<br />

Company financial ability in repaying: …………..<br />

Company financial statements: …………..<br />

Fixed asset value: …………..<br />

Liquidity asset and cash flow: …………..<br />

Obtaining financial resource method: …………..<br />

Product variety: …………..<br />

Competitors: …………..<br />

Target market: …………..<br />

Capital questions:<br />

Account based asset value: …………..<br />

Company management ability and his background: …………..<br />

Condition questions<br />

Encountering national economical fluctuation: …………..<br />

Encountering international economical fluctuation: …………..<br />

Economic and politic society condition: …………..<br />

Place of related industry in society: …………..<br />

Collateral questions:<br />

Amount of customer collateral: …………..<br />

164


MEASURING AGREEMENT WITH THE WEIGHTED KAPPA COEFFICIENT OF CREDIT RATING<br />

DECISIONS: A CASE OF BANKING SECTOR<br />

Funda H. Sezgin and Ozlen Erkal, University of İstanbul, Turkey<br />

Email:hfundasezgin@yahoo.com; ozlenerkal@istanbul.edu.tr<br />

Abstract. An important reason for the global economic crisis affecting the whole world is the large amount of loss caused by banks<br />

that went bankrupt even though they had high credit scores from credit rating agencies. Nowadays, it is very important to<br />

measure the agreement among the decisions these agencies make, since trust in them has eroded. In this study the weighted kappa<br />

coefficient is computed to measure the agreement of credit rating decisions made for banks in member countries of the European Union.<br />

This coefficient is effective when the number of rating levels is large and, especially, when the significance of disagreement differs<br />

across levels. Credit rating agencies evaluate the same entity independently; the weighted kappa coefficient is an extension of the<br />

kappa coefficient that measures the agreement of their common classifications.<br />

Keywords: Credit Rating, Weighted Kappa Coefficient, Banking Sector<br />

JEL classification: G21, C10, G32<br />
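As a hedged sketch of the statistic in the title, the weighted kappa for two raters grading the same items can be computed as below; since the weighting scheme is not specified at this point in the paper, linear disagreement weights are assumed here.

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, n_levels):
    """Weighted Cohen's kappa with linear disagreement weights.

    ratings_a, ratings_b: integer levels in [0, n_levels) assigned by two
    raters (e.g. two credit rating agencies) to the same set of items."""
    O = np.zeros((n_levels, n_levels))
    for a, b in zip(ratings_a, ratings_b):
        O[a, b] += 1
    O /= O.sum()                                  # observed proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))    # expected under independence
    i, j = np.indices((n_levels, n_levels))
    W = np.abs(i - j) / (n_levels - 1)            # linear disagreement weights
    return 1.0 - (W * O).sum() / (W * E).sum()

# Perfect agreement gives kappa = 1; systematic disagreement pushes it down.
k_same = weighted_kappa([0, 1, 2, 1, 0], [0, 1, 2, 1, 0], 3)   # -> 1.0
k_diff = weighted_kappa([0, 0, 1, 1, 2], [2, 2, 1, 1, 0], 3)
```

Because the weights grow with the distance between rating levels, a one-notch disagreement is penalized less than a disagreement of several notches, which is the property the abstract highlights.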

1 Introduction<br />

Together with the globalization process, the increasing complexity of financial markets and the variety of debtors,<br />

both investors and official authorities have begun to attach more importance to the concept of rating. An important<br />

reason that contributed to the importance of rating in international markets is that rating has started to be made use<br />

of in supervising and regulating financial markets. The decision of The Basel Committee on Banking Supervision<br />

under Bank for International Settlements (BIS) to use rating as a tool in regulating banks’ capital adequacy has been<br />

the most important evidence of the fact that rating is an integral part in markets and that it will continue to play an<br />

ever more important role.<br />

Rating as an expression of the credibility of a debt is becoming increasingly important for banks and their<br />

customers. In addition to political crises, financial crises that have a significant negative effect on a state’s stability<br />

and banking crises as a sub-branch have long brought to the agenda the idea of setting standards in banking and<br />

financial markets. The Basel Committee on Banking Supervision, an embodiment of this idea, aims to measure<br />

banks’ capital adequacy and their capacity to meet minimum agreed standards (Longstaff & Schwartz, 1995).<br />

The credit rating approaches for banks have brought about significant improvements to credit demand<br />

assessments following Basel II, which provided a framework of greater durability, stability and contribution<br />

compared to Basel I. Rating, playing an important role in the implementation of Basel II credit risk measurement<br />

approaches, has paved the way for a transition from a subjective crediting process based on expert opinion to an<br />

objective crediting process in which risk is quantified (Nickell et al., 2000).<br />

As an international standard, Basel II, which led to radical changes in banking activities and in their regulation<br />

and supervision with its scope and technical specifications, has inevitable deep impacts on banks, authorities on<br />

banking supervision and all parties related with banks (customers, rating firms, data validation firms, etc.) as well as<br />

on states’ economies.<br />

In this study, the agreement of decisions made by the world&#39;s leading credit rating agencies in rating EU banks will<br />

be measured using the weighted kappa coefficient. The weighted kappa coefficient is a generalized version of the kappa<br />

coefficient which measures the classification agreement between different decision makers who independently rate the<br />

same situation. Hence, the classification agreement between credit rating agencies will be measured. The reason for<br />

studying this field is the fact that the top-rated banks declared bankruptcy following the global economic crisis,<br />

which subsequently led to a decline in the credibility of and trust in credit rating agencies.<br />



2 The Concept of Rating<br />

Rating is a tool that makes life easier for markets and their beneficiaries by translating information acquired through<br />

costly and time-consuming hard work into simple symbols. Credit rating agencies use certain categories which are<br />

illustrated by means of easily comprehensible symbols such as letters, numbers or both. Rating is applicable not only<br />

for a country but also for a single bank or enterprise. Rating has been used in different areas, such as securities,<br />

commercial firms, financial institutions and banks. The benefits rating brings to economic life can be summarized as follows:<br />

1- It provides the economy with financial markets which show reliability and stability in their development,<br />

2- Provides outside sources for the economy and helps markets integrate with international markets,<br />

3- Classifies the general risk level in the economy, increases the efficiency of financial operations and finances<br />

development more effectively (Jarrow & Turnbull, 2000).<br />

The aim of establishing a rating system is to assess a firm’s risks more objectively. This not only helps to create<br />

a common language but also gets the banks to make very similar pricings for a firm.<br />

Rating, which is a standard and objective approach developed by professionals, helps define a debtor’s<br />

credibility and capacity to make regular payments, and assesses and affects his/her role in monetary and capital<br />

markets. Rating is made up of strong and useful symbols which help institutions, credit demanders/suppliers,<br />

investors, intermediaries and regulators in monetary and capital markets in their assessments (Nye and Eke, 2004).<br />

There are two types of ratings:<br />

External Ratings: External rating is an indicative credit rating given by rating agencies to relatively large-scale<br />

firms to allow them to borrow from capital markets.<br />

Internal Ratings: Internal ratings are the ratings given to debt demanders by banks in accordance with the banks’<br />

own internal rating criteria. In internal ratings, banks request their customers’ financial statements, official<br />

documents and all qualitative and quantitative credentials. The basic condition for such a system to run smoothly is<br />

an easy and standard access to chronologically-stored workable data (Lando, 1999).<br />

It was in the mid-1980s when developments in ratings gained impetus. The debt crisis of the 1970s and the<br />

Mexican crisis of 1984 brought to the agenda the need for a faster and more effective form of rating in the<br />

international monetary system. Both the disappointing performances of the IMF and the World Bank, and the rapid<br />

development and diversification of the financial system brought the concept of rating to the fore. In particular,<br />

• Acceleration of capital movement and the formation of new investment opportunities,<br />

• Capital flows becoming an object of trade in financial markets,<br />

• An increased interest of a wide audience including small to large scale banks and bond holders in financial<br />

markets raised the importance of countrywide risk ratings.<br />

There has been an increased need for rating agencies in recent years. The classification of countries according to<br />

their risk levels provides insight for investors as to what kind of risks they might encounter in the countries they<br />

plan to invest in.<br />

2.1 Credit Rating Agencies and Rating Methodologies<br />

Credit rating agencies are companies that assign credit ratings for borrowers’ capacity and willingness to fulfil their<br />

debt obligations. Ratings represent an independent opinion and cannot be qualified as advice for investment. Credit<br />

rating agencies can assign ratings for states, financial institutions, companies and various capital market issues. Such<br />

agencies are supervised by relevant state regulatory and supervisory bodies. They are subjected to supervision by<br />

Securities and Exchange Commission (SEC) in the USA and Capital Markets Board in Turkey. Agencies that meet<br />

certain criteria have been registered as “Nationally Recognized Statistical Rating Organization” (NRSRO) by the<br />

Securities and Exchange Commission (White, 2002).<br />



In 1975, the SEC created the NRSRO designation in order to standardize the operations of rating agencies and their entry to<br />

the market. Currently, there are ten NRSROs:<br />

• A.M. Best Company, Inc.<br />

• DBRS Ltd.<br />

• Egan-Jones Rating Company<br />

• Fitch, Inc.<br />

• Japan Credit Rating Agency, Ltd.<br />

• LACE Financial Corp.<br />

• Moody’s Investors Service, Inc.<br />

• Rating and Investment Information, Inc.<br />

• Realpoint LLC<br />

• Standard & Poor’s Ratings Services<br />

Credit score or rating score is the measurement of creditworthiness by a rating agency. In a broad definition,<br />

credit rating is an independent opinion about whether an organization can fulfil its financial obligations in time or<br />

not. Results are translated into symbols in order to make this opinion easily comprehensible. The symbols are<br />

universally applicable. A corporation or a state with the strongest capacity to fulfil its obligations is rated AAA.<br />

Grades from AAA to BBB are investment grades, while those lower than BBB are non-investment grades indicating<br />

riskiness. Below are the rating tables (Table 1, Table 2 and Table 3) used by credit rating agencies, such as S&P,<br />

Fitch and Moody’s.<br />

Credit ratings can be categorized as long- or short-term, and foreign or local currency. The data used in the<br />

study are based on banks’ long-term payment performances and their USD credit scores. These are taken from<br />

Reuters’ data on banks from December 2010. Banks rated by Fitch, S&P and Moody’s are used as observation<br />

units and the agencies’ agreement is measured in pairs with the weighted kappa coefficient.<br />

2.1.1 Standard and Poor’s Rating Definitions<br />

Standard and Poor’s (S&P)’s grades fall into two broad categories: “Investment merit” and “Speculative”. There is<br />

a total of ten rating grades, four of which are under “Investment merit” with the remaining six under “Speculative”.<br />

AAA is the highest grade given by S&P to corporations with a capacity to pay their debt at ease. BBB and BB stand<br />

on a critical line. While BBB is investment merit, BB is speculative. The lowest grade D means that it is impossible<br />

for a corporation to pay off its debt and implies bankruptcy. S&P’s grading scale is in Table 1.<br />

S&P Long-term Credit Ratings Scale<br />

Investment AAA Extremely strong capacity to meet financial commitments.<br />

AA Very strong capacity to meet financial commitments.<br />

A Strong capacity to meet financial commitments, but somewhat susceptible to adverse economic conditions and changes in<br />

circumstances.<br />

BBB Adequate capacity to meet financial commitments, but more subject to adverse economic conditions.<br />

Speculative BB Less vulnerable in the near-term compared to other speculative ratings but faces major ongoing uncertainties to adverse<br />

business, financial and economic conditions.<br />

B More vulnerable to adverse business, financial and economic conditions but currently has the capacity to meet financial<br />

commitments.<br />

CCC Currently vulnerable and dependent on favorable business, financial and economic conditions to meet financial<br />

commitments.<br />

CC Currently highly vulnerable.<br />

C Currently highly vulnerable obligations and other defined circumstances.<br />

D Payment default on financial commitments.<br />

Table 1: S&P Credit Ratings Scale<br />

2.1.2 Moody’s Investors Service Rating Definitions<br />

Long-term credit ratings of Moody’s Investors Service are opinions about relative credit risk of obligations of<br />

one year or more. Like S&P, Moody’s grades fall into two broad categories: “Investment merit” and “Speculative”.<br />

There is a total of nine rating grades, four of which are under “Investment merit” with the remaining five under<br />

“Speculative”. Moody’s Investors Service’s highest grade is Aaa, followed by Aa, A and Baa under “Investment<br />



merit”. Ba, B, Caa, Ca and C fall under “Speculative”. The lowest grade C indicates no prospect of debt<br />

reimbursement. Moody’s Investors Service’s grading scale is in Table 2.<br />

Moody’s Long-term Credit Ratings Scale<br />

Investment Aaa The highest capacity to meet obligations, with minimal credit risk.<br />

Aa High capacity to meet obligations and are subject to very low credit risk.<br />

A Upper-medium grade to meet obligations and are subject to low credit risk.<br />

Baa Moderate capacity to meet obligations, moderate credit risk, may possess certain speculative characteristics.<br />

Speculative Ba Speculative characteristics and substantial credit risk.<br />

B Speculative and high credit risk.<br />

Caa Poor standing, very high credit risk.<br />

Ca Highly speculative and likely in, or very near, default, with some prospect of recovery of principal and interest.<br />

C In default, with little prospect for recovery of principal or interest.<br />

Table 2: Moody’s Investors Service Credit Ratings Scale<br />

2.1.3 Fitch Ratings’ Rating Definitions<br />

In Fitch Ratings’ rating scale, there are four types of rating grades under “Investment merit” and eight under<br />

“Speculative”. The highest grade is AAA in the “Investment merit” category, while speculative DDD, DD and D<br />

grades indicate non-payment prospects. Compared to DD and D, DDD indicates a higher probability (90% - 100%)<br />

of payment and accrued interest collection. D indicates a low probability of payment. Fitch Ratings’ grading scale is<br />

in Table 3.<br />

Fitch Ratings Long-term Credit Ratings Scale<br />

Investment AAA Highest credit quality. ‘AAA’ ratings denote the lowest expectation of default<br />

risk. They are assigned only in cases of exceptionally strong capacity for<br />

payment of financial commitments. This capacity is highly unlikely to be<br />

adversely affected by foreseeable events.<br />

AA Very high credit quality. ‘AA’ ratings denote expectations of very low default<br />

risk. They indicate very strong capacity for payment of financial commitments.<br />

This capacity is not significantly vulnerable to foreseeable events.<br />

A High credit quality. ‘A’ ratings denote expectations of low default risk. The<br />

capacity for payment of financial commitments is considered strong. This<br />

capacity may, nevertheless, be more vulnerable to adverse business or economic<br />

conditions. Possesses risky characteristics.<br />

BBB Good credit quality. ‘BBB’ ratings indicate that expectations of default risk are<br />

currently low. The capacity for payment of financial commitments is considered<br />

adequate but adverse business or economic conditions are more likely to impair<br />

this capacity. ‘BBB’ is the lowest investment category.<br />

Speculative BB Speculative. ‘BB’ ratings indicate an elevated vulnerability to default risk,<br />

particularly in the event of adverse changes in business or economic conditions<br />

over time; however, business or financial flexibility exists which supports the<br />

servicing of financial commitments. BB rated bonds and bills are not considered<br />

investment merit.<br />

B Highly speculative. ‘B’ ratings indicate that material default risk is present, but a<br />

limited margin of safety remains. Financial commitments are currently being<br />

met; however, capacity for continued payment is vulnerable to deterioration in<br />

the business and economic environment.<br />

CCC, CC, C Default is highly probable. May not be able to meet financial obligations.<br />

Capacity to fulfil obligations depends on economic conditions over time. ‘CC’<br />

indicates default of some kind appears probable, while ‘C’ indicates default is<br />

imminent or inevitable.<br />

DDD, DD, D Default. The ratings of obligations in this category are based on their prospects<br />

for achieving partial or full recovery in a reorganization or liquidation of the<br />

obligor. While expected recovery values are highly speculative and cannot be<br />

estimated with any precision, the following serve as general guidelines. ‘DDD’<br />

obligations have the highest potential for recovery, around 90% - 100% of<br />

outstanding amounts and accrued interest. ‘DD’ indicates potential recoveries in<br />

the range of 50% - 90% and 'D' the lowest recovery potential below 50%.<br />

Table 3: Fitch Ratings Credit Ratings Scale<br />



3 Weighted Kappa Coefficient<br />

Often some disagreements between the two raters can be considered as more important than others. For example,<br />

disagreement on two distant categories should be considered more important than on neighbouring categories on an<br />

ordinal scale. For this reason, Cohen (1968) introduced the weighted kappa coefficient. Agreement (ω_jk) or<br />

disagreement (ν_jk) weights are distributed a priori over the K² cells of the K × K contingency table (Vanbelle & Albert,<br />

2009). The weighted kappa coefficient is defined in terms of agreement weights:<br />

κ̂_w = (p_ow − p_ew) / (1 − p_ew)<br />

with<br />

p_ow = Σ_{j=1}^{K} Σ_{k=1}^{K} ω_jk p_jk and p_ew = Σ_{j=1}^{K} Σ_{k=1}^{K} ω_jk p_j· p·k (0 ≤ ω_jk ≤ 1 and ω_jj = 1),<br />

where p_jk is the observed proportion of units in cell (j, k) and p_j· and p·k are the row and column marginal proportions.<br />

It can also be defined with disagreement weights,<br />

κ̂_w = 1 − q_ow / q_ew<br />

with q_ow = Σ_{j=1}^{K} Σ_{k=1}^{K} ν_jk p_jk and q_ew = Σ_{j=1}^{K} Σ_{k=1}^{K} ν_jk p_j· p·k (ν_jk ≥ 0 and ν_jj = 0).<br />

Although weights can be arbitrarily defined, two weighting schemes are most commonly used. These are the<br />

“linear” weights introduced by Cicchetti and Allison (1971),<br />

ω_jk = 1 − |j − k| / (K − 1)<br />

and the quadratic weights introduced by Fleiss & Cohen (1973),<br />

ω_jk = 1 − ((j − k) / (K − 1))²<br />

Note that the disagreement weights ν_jk = (j − k)² are also commonly used (Ludbrook, 2002; Agresti, 1992) and<br />

that Cohen’s kappa coefficient is a particular case of the weighted kappa coefficient where ω_jk = 1 when j = k and<br />

ω_jk = 0 otherwise (Ben-David, 2008).<br />
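As a concrete illustration (our own sketch in Python, not part of the original paper), both weighting schemes can be generated directly from the formulas above. For K = 6 rating categories, adjacent categories receive a linear weight of 1 − 1/5 = 0.8 and a quadratic weight of 1 − (1/5)² = 0.96.

```python
def linear_weights(K):
    # Cicchetti-Allison (1971) linear agreement weights: w_jk = 1 - |j - k| / (K - 1)
    return [[1 - abs(j - k) / (K - 1) for k in range(K)] for j in range(K)]

def quadratic_weights(K):
    # Fleiss-Cohen (1973) quadratic agreement weights: w_jk = 1 - ((j - k) / (K - 1))**2
    return [[1 - ((j - k) / (K - 1)) ** 2 for k in range(K)] for j in range(K)]
```

Both schemes give a weight of 1 on the diagonal and smaller weights for more distant categories, which is what makes distant disagreements count more heavily against the coefficient.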

The weighted kappa coefficient’s value lies between −1 and +1. When the weighted kappa coefficient is 1, the two decision<br />

makers are in “perfect agreement”. When observed agreement equals expected agreement, the weighted kappa<br />

coefficient is 0 and the decision makers are in “random agreement”. When the weighted kappa coefficient is −1, the two<br />

decision makers are in “perfect disagreement” (Fleiss et al., 2003).<br />

The benchmarks used in interpreting the kappa statistic are similarly used in interpreting the weighted<br />

kappa statistic, which corresponds to the level of agreement between decision makers. Accordingly,<br />

values higher than 0.75 are interpreted as “perfect agreement” beyond expected agreement, values less than 0.40 as<br />

“slight agreement” beyond expected agreement, and values between 0.40 and 0.75 as “slight to good agreement”<br />

beyond expected agreement (Landis & Koch, 1977).<br />
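The definition above can be sketched as a short function (a hedged illustration of ours, not the authors' code): it takes a K × K table of joint ratings and a matrix of agreement weights, and returns κ̂_w = (p_ow − p_ew) / (1 − p_ew).

```python
def weighted_kappa(counts, weights):
    """Weighted kappa from a K x K contingency table of joint ratings.

    counts[j][k]  -- number of units rated category j by rater 1 and k by rater 2
    weights[j][k] -- agreement weight in [0, 1], with weights[j][j] = 1
    """
    K = len(counts)
    n = sum(sum(row) for row in counts)
    p = [[counts[j][k] / n for k in range(K)] for j in range(K)]
    row = [sum(p[j][k] for k in range(K)) for j in range(K)]   # rater 1 marginals
    col = [sum(p[j][k] for j in range(K)) for k in range(K)]   # rater 2 marginals
    p_ow = sum(weights[j][k] * p[j][k] for j in range(K) for k in range(K))
    p_ew = sum(weights[j][k] * row[j] * col[k] for j in range(K) for k in range(K))
    return (p_ow - p_ew) / (1 - p_ew)
```

With identity weights (1 on the diagonal, 0 elsewhere) the function reduces to Cohen's unweighted kappa, the special case noted earlier.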



4 Empirical Analysis<br />

In the study, the agreement among common assessment decisions made by credit rating agencies for the top ten<br />

banks (by asset size) in EU member states is measured by the weighted kappa coefficient. A total of six categories<br />

were selected (AAA, AA, A, BBB, BB and B). Grades lower than B were not taken into account. Because top-rated<br />

banks declared bankruptcy following the financial crisis, which resulted in a loss of credibility and trust for credit<br />

rating agencies, banks rated B and higher were given priority in this study, which takes into account the credit scores<br />

given in 2009-2010. The aim here is to explore the recent agreement between credit rating agencies, which adopted<br />

stricter policies following the post-crisis regulations in the banking sector in many countries. Banks with lower<br />

grades were not included in the sample. The weighted kappa coefficient used in all applications to<br />

measure the agreement among rating agencies was calculated using both linear and quadratic weighting schemes.<br />

The statistical values for kappa, weighted linear kappa and weighted quadratic kappa were determined before the<br />

analysis. Since the unweighted kappa coefficient was low (0.689), it was decided that the weighted kappa coefficient (linear = 0.925 and<br />

quadratic = 0.891) would be suitable for use.<br />

Initially, credit rating agencies “S&P” and “Moody’s” were employed in the study as the first pair of decision<br />

makers. The categorical distribution for the 200 banks rated by both of the agencies is shown in Table 4.<br />

Moody's<br />

AAA AA A BBB BB B Total<br />

AAA 2 0 0 2 0 0 4<br />

S&P AA 2 15 10 10 0 0 37<br />

A 3 10 33 1 0 0 47<br />

BBB 0 0 1 20 10 0 31<br />

BB 0 0 0 2 45 2 49<br />

B 0 0 0 0 0 32 32<br />

Total 7 25 44 35 55 34 200<br />

Table 4: S&P and Moody’s Results<br />

Linear and quadratic weights are shown in Table 5 and Table 6.<br />

Moody's<br />

AAA AA A BBB BB B<br />

AAA 1 0.75 0.56 0.32 0.3 0<br />

S&P AA 0.75 1 0.75 0.56 0.3 0.28<br />

A 0.56 0.75 1 0.75 0.6 0.32<br />

BBB 0.32 0.56 0.75 1 0.8 0.56<br />

BB 0.28 0.32 0.56 0.75 1 0.75<br />

B 0 0.28 0.32 0.56 0.8 1<br />

Table 5: Linear Weights Results<br />

Moody's<br />

AAA AA A BBB BB B<br />

AAA 1 0.89 0.85 0.72 0.4 0<br />

S&P AA 0.89 1 0.89 0.85 0.7 0.43<br />

A 0.85 0.89 1 0.89 0.9 0.72<br />

BBB 0.72 0.85 0.89 1 0.9 0.85<br />

BB 0.43 0.72 0.85 0.89 1 0.89<br />

B 0 0.43 0.72 0.85 0.9 1<br />

Table 6: Quadratic Weights Results<br />



The observed and expected ratios calculated with the linear weighting scheme are 0.965 and 0.923 respectively.<br />

With the quadratic weighting scheme these are 0.991 and 0.973 respectively. The weighted agreement exceeding the<br />

expected weighted agreement between these credit rating agencies is calculated as 82.78% with linear weights<br />

and 92.76% with quadratic weights.<br />
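As a worked check (our sketch, not from the paper), the observed and expected weighted proportions can be recomputed from the Table 4 counts. We use the textbook linear weights ω_jk = 1 − |j − k| / (K − 1); since the weights printed in Table 5 deviate from this scheme (e.g. 0.75 rather than 0.8 for adjacent categories), the figures need not reproduce the paper's 0.965 and 0.923 exactly.

```python
# Joint long-term ratings of the 200 banks scored by both S&P (rows) and
# Moody's (columns), transcribed from Table 4; categories AAA, AA, A, BBB, BB, B.
counts = [
    [2,  0,  0,  2,  0,  0],
    [2, 15, 10, 10,  0,  0],
    [3, 10, 33,  1,  0,  0],
    [0,  0,  1, 20, 10,  0],
    [0,  0,  0,  2, 45,  2],
    [0,  0,  0,  0,  0, 32],
]
K = 6
n = sum(map(sum, counts))
w = [[1 - abs(j - k) / (K - 1) for k in range(K)] for j in range(K)]  # linear weights
p_ow = sum(w[j][k] * counts[j][k] / n for j in range(K) for k in range(K))
row = [sum(counts[j]) / n for j in range(K)]
col = [sum(counts[j][k] for j in range(K)) / n for k in range(K)]
p_ew = sum(w[j][k] * row[j] * col[k] for j in range(K) for k in range(K))
kappa_w = (p_ow - p_ew) / (1 - p_ew)  # observed weighted agreement beyond chance
```

Under these standard weights the observed weighted proportion works out to 0.93, close to the paper's 0.965, with a clearly positive κ̂_w, so the qualitative conclusion of strong S&P-Moody's agreement survives the change of weighting scheme.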

Credit rating agencies “Moody’s” and “Fitch” were employed in the study as the second pair of decision<br />

makers. The categorical distribution for the 185 banks rated by both of the agencies is shown in Table 7.<br />

Moody's<br />

AAA AA A BBB BB B Total<br />

AAA 5 1 0 0 0 0 6<br />

Fitch AA 1 15 5 6 0 0 27<br />

A 2 10 16 1 0 0 29<br />

BBB 0 0 1 38 8 0 47<br />

BB 0 0 0 5 33 5 43<br />

B 0 0 0 0 0 33 33<br />

Total 8 26 22 50 41 38 185<br />

Table 7: Fitch and Moody’s Results<br />

The observed and expected ratios calculated with the linear weighting scheme are 0.927 and 0.875 respectively.<br />

With the quadratic weighting scheme these are 0.972 and 0.916 respectively. The weighted agreement exceeding the<br />

expected weighted agreement between these credit rating agencies is calculated as 79.32% with linear weights<br />

and 88.01% with quadratic weights.<br />

Credit rating agencies “S&P” and “Fitch” were employed in the study as the third pair of decision makers. The<br />

categorical distribution for the 194 banks rated by both of the agencies is shown in Table 8.<br />

S&P<br />

AAA AA A BBB BB B Total<br />

AAA 2 1 0 0 0 0 3<br />

Fitch AA 2 9 3 3 0 0 17<br />

A 2 8 14 2 0 0 26<br />

BBB 0 0 2 32 4 0 38<br />

BB 0 0 0 3 31 4 38<br />

B 0 0 0 0 15 57 72<br />

Total 6 18 19 40 50 61 194<br />

Table 8: S&P and Fitch Results<br />

The observed and expected ratios calculated with the linear weighting scheme are 0.940 and 0.896 respectively.<br />

With the quadratic weighting scheme these are 0.982 and 0.950 respectively. The weighted agreement exceeding the<br />

expected weighted agreement between these credit rating agencies is calculated as 88.21% with linear weights and<br />

91.01% with quadratic weights.<br />

As a result of the abovementioned comparisons S&P and Fitch were identified as the two agencies with the<br />

greatest agreement according to the values of the weighted kappa coefficient calculated using both linear and<br />

quadratic weighting. Using the same method, the study concludes that S&P and Moody’s are the two rating agencies<br />

with the lowest agreement. In light of the degrees of agreement obtained in the<br />

application, the degree of agreement exceeding the expected level of agreement is quite high among the three leading<br />

credit rating agencies in the sector. Hence, it would not be wrong to rely on the credit scores given by any of these<br />

three agencies.<br />



5 Conclusion<br />

Despite various criticisms regarding rating agencies’ increased importance for national and international markets,<br />

the importance of rating agencies for market players is great. As a result of globalization together with the<br />

integration of financial markets, the influence of rating on markets is rising day by day. Markets acknowledge that<br />

rating agencies are successful in rating risks. On the other hand, economic crises and unexpected corporate<br />

bankruptcies raise questions about the performance of rating agencies. Several studies, revealing very different<br />

results, have been conducted to see whether rating agencies have any information value for markets.<br />

During the Basel II process, credit rating agencies gained ever more importance. Basel II’s requirement for<br />

internal and external audit in banks indicates that the need for credit rating agencies will increase in the future.<br />

Since the classification of units is a complicated process, discovering which decision maker is more reliable than the<br />

other poses an important concern. Hence, classifying a sample of units by two or more decision makers<br />

independently, and identifying the agreement and significance of this operation, become a matter of interest.<br />

When the number of levels increases and the significance of disagreement among levels is not equal, it is more<br />

appropriate to use the weighted kappa statistic. In light of the global economic crisis and<br />

banks declaring bankruptcy despite being given top credit scores by credit rating agencies, the study aims to<br />

measure the agreement among credit rating agencies.<br />

Data from EU banks rated by credit rating agencies were compared in pairs. The levels of agreement<br />

reached in the application were high. Hence, as a result of the application, it was<br />

concluded that the credit scores given by credit rating agencies are applicable. In other words, these agencies are<br />

in agreement. While there are high levels of agreement between all credit rating agencies, the agreement is calculated<br />

the highest between S&P and Fitch, and the lowest between S&P and Moody’s.<br />

6 References<br />

Agresti, A. (1992). Modelling Patterns of Agreement and Disagreement. Statistical Methods in Medical Research,<br />

1(2), 201-218.<br />

Ben-David, A. (2008). Comparison of Classification Accuracy Using Cohen’s Weighted Kappa, Expert Systems<br />

With Applications, 34(3), 825-832.<br />

Cicchetti, D. V., & Allison, T. (1971). A New Procedure for Assessing Reliability of Scoring EEG Sleep Recordings.<br />

American Journal of EEG Technology, 11(2), 101-109.<br />

Cohen, J. (1968). Weighted Kappa: Nominal Scale Agreement with Provision for Scaled Disagreement or Partial<br />

Credit. Psychological Bulletin, 70(1), 213-220.<br />

Fleiss, J. L., & Cohen, J. (1973). The Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as<br />

Measure of Reliability. Educational and Psychological Measurement, 33(2), 613-619.<br />

Fleiss, J. L., Levin, B., & Paik, M. C. (2003). The Measurement of Interrater Agreement. Statistical Methods for<br />

Rates and Proportions 3rd ed., John Wiley & Sons. Inc., New-Jersey.<br />

Jarrow, R., & Turnbull, S. (2000). The Intersection of Market and Credit Risk. Journal of Banking & Finance, 24(2),<br />

271-299.<br />

Landis, J. R., & Koch, G. G. (1977). An Application of Hierarchical Kappa-type Statistics in the Assessment of<br />

Majority Agreement Among Multiple Observers. Biometrics, 33(3), 363-374.<br />

Lando, D. (1999). Some Elements of Rating-Based Credit Risk Modeling. Working Paper, No:123 Department of<br />

Operations Research, University of Copenhagen.<br />

Ludbrook, J. (2002). Statistical Techniques for Comparing Measurers and Methods of Measurement: a Critical<br />

Review. Clinical and Experimental Pharmacology and Physiology, 29(1), 527-536.<br />

Longstaff, F. A., & Schwartz, E. S. (1995). Valuing Credit Derivatives. Journal of Fixed Income, 5(1), 6-12.<br />

Nickell, P., Perraudin, W., & Varotto, S. (2000). Stability of Rating Transitions. Journal of Banking & Finance ,<br />

24(1), 205-229.<br />

Nye, R. P., & Eke, S. (2004). Türkiye’de Kredi Derecelendirmesi [Credit Rating in Turkey], Activeline, 30-46.<br />

White, L. J. (2002). The Credit Rating Industry: an Industrial Organization Analysis. Ratings Rating Agencies and<br />

the Global Financial System 1st ed., Kluwer Academic Publishers, New-York.<br />

Vanbelle, S. & Albert, A. (2009). A Note on the Linearly Weighted Kappa Coefficient for Ordinal Scales. Statistical<br />

Methodology, 6(3), 157-163.<br />



THE GLOBAL FINANCIAL CRISIS AND THE BANKING SECTOR IN SERBIA*<br />

Emilija Vuksanović, PhD<br />

Violeta Todorović, M.A.<br />

Abstract. Financial crises in general are a common part of the development of banking systems in modern times. The modern<br />

environment can be defined as complex, dynamic, heterogeneous and unpredictable. The biggest crisis of the 21st<br />

century (the "Subprime" crisis) spread very quickly to include the real economy and encompassed the whole world. Although it is not<br />

yet possible to evaluate completely the scope of the crisis which started in June 2007 on the real estate market in the USA, it is<br />

obvious that the global connections influenced its spreading to the financial markets and economy of other continents. In reality the<br />

market globalization stipulated the globalization of the financial crisis and the recession of the economy. The Serbian banking sector<br />

was not immune to the global financial crisis either. The first blow of the crisis was felt in the banks in October 2008, when there was<br />

a large-scale withdrawal of the deposits and the growth of the banks` interest rates. However, if compared with the neighbouring<br />

countries, the Serbian banking sector met the crisis adequately capitalized and highly solvent, owing to the anti-cyclic monetary policy<br />

and the undertaken prudential measures by the National Bank of Serbia. The aim of this paper is the analysis of the state of the<br />

banking system in Serbia in the circumstances of the global financial crisis and its future prospects in the circumstances of<br />

increased instability and turbulence.<br />

Key words: financial crisis, banking sector, regulatory measures, capital adequacy<br />

JEL classification: G2<br />

1. Introduction<br />

The first major financial crisis in 21st century that includes "esoteric instruments, unaware regulators, and skittish<br />

investors“ (Reinhart and Rogoff, 2009, p. 291), has spread rapidly to real economy and grasped the whole world.<br />

The presence of global connections caused the far-reaching consequences of the current financial crisis for the world economy<br />

and finance. This caused a global recession followed by a drop in living standards, an increase in unemployment and<br />

inflation, and growing budgetary and foreign trade deficits. Naturally, the Serbian banking sector was not immune to the<br />

effects of the global financial crisis.<br />

The subject of this analysis will be, on the one hand, the intensity and the character of world trends’ influence on the banking system<br />

in Serbia, and on the other, the effectiveness of national financial regulation as a response to crisis<br />

effects. Since the banks are the bearers of financial system in Serbia, it is logical to expect the key effects of the<br />

financial crisis to reflect primarily on banking sector, and the fact that this sector is slightly more developed in<br />

comparison to other sectors in Serbia, leads to the presumption that precisely this sector will be crucial in<br />

stabilization and further growth of Serbian economy. The response of national financial regulations to the effects of<br />

global financial crisis was based precisely on these facts.<br />

In order to establish to what extent and in which way banks were exposed to the effects of the global financial crisis,<br />
this paper first of all analyses the characteristics of Serbia's banking sector in the pre-crisis period. Further analysis<br />
determines the concrete effects of the banking crisis and the effectiveness of the regulatory response. The analysis<br />
is rounded off with open questions and an assessment of the prospects for further development.<br />

2. Characteristics of Serbian Banking Sector in Pre-crisis Period<br />

During the 1990s, the Serbian banking sector was practically ruined, first of all because of the<br />
conditions under which banks operated (political instability, a high inflation rate, economic isolation, the loss of<br />
foreign exchange savings and the fact that citizens completely lost trust in the banking sector). Such a state of the<br />
banking sector was in no way sustainable, which called for starting mechanisms for its reconstruction.<br />

After the reform of the banking system was conducted, analyses showed that the transformation and reconstruction<br />
processes initiated in 2001 brought about significant positive effects in the banking sector:<br />
- the structure of the banking sector was enhanced (on the one hand, the overall number of banks was reduced from<br />
86 at the beginning of 2001 to 49 by the end of the year and the number of state-owned banks fell, while on the other hand, the<br />
number of private banks owned by either foreign or domestic private persons increased);<br />

* The paper is the result of work on a project funded by the Ministry of Science and Technology of the Republic of Serbia, No. 47004 (2011-<br />
2014), titled: Upgrading the public policy in Serbia in function of improvement of the social security of citizens and sustainable economic growth.<br />



- the losses of the entire banking sector were reduced (at the beginning of 2001 the losses were 48 billion<br />
RSD, and already by the middle of 2002 they had dropped to 11 billion RSD);<br />
- the trust of citizens was regained, viewed from the aspect of savings, especially foreign exchange savings;<br />
- the share of capital in total liabilities increased (at the end of 2000 the share of capital was only 3,4%<br />
of total liabilities, while by the end of 2001 it amounted to 46185,7 million dinars, in other words, its share in total<br />
liabilities had increased to 15,9%);<br />
- the compliance of banks' business operations improved, from the aspect of relative business indicators, as<br />
shown in table 1.<br />

Indicators                                                                  31.12.2001.   31.12.2000.<br />
Capital adequacy ratio (min. 8%)                                               21,9           0,7<br />
Short-term placements to resources ratio (>=100%)                             138,4         138,0<br />
Foreign currency liabilities to foreign currency assets ratio (95-105%)        94,3          96<br />
Large and largest possible exposure ratio (max 80%)                           233,3        3928,9<br />
Share in company capital ratio (max 15%)                                        6,0          17,9<br />
Ratio of the share in capital of banks and other financial institutions (max 51%)  5,2      33,3<br />
Ratio of the placements in capital assets (max 20%)                            33,1         149,8<br />
Table 1 – Relative Business Indicators<br />

Source: Annual Report 2001, National Bank of Yugoslavia, p. 90. http://www.nbs.rs/internet/latinica/90/index.html<br />

Based on the table presented, it can be concluded that, in comparison to 2000, the compliance of banks'<br />
operations was significantly improved in terms of capital adequacy, large and largest possible exposures, share in<br />
company capital and placements in capital assets.<br />
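The compliance logic behind table 1 can be sketched as a simple bounds check. The code below is an illustration, not the NBS methodology; indicator names are shortened, and the values are the 31.12.2001 column of the table:<br />

```python
# Illustrative bounds check for the prudential indicators in table 1.
# (A simplification, not the NBS methodology; values in percent.)
LIMITS = {
    "capital adequacy":             ("min", 8.0),
    "short-term placements":        ("min", 100.0),
    "fx liabilities / fx assets":   ("range", (95.0, 105.0)),
    "large exposures":              ("max", 80.0),
    "share in company capital":     ("max", 15.0),
    "share in banks' capital":      ("max", 51.0),
    "placements in capital assets": ("max", 20.0),
}

VALUES_2001 = {
    "capital adequacy": 21.9,
    "short-term placements": 138.4,
    "fx liabilities / fx assets": 94.3,
    "large exposures": 233.3,
    "share in company capital": 6.0,
    "share in banks' capital": 5.2,
    "placements in capital assets": 33.1,
}

def compliant(name, value):
    kind, limit = LIMITS[name]
    if kind == "min":
        return value >= limit
    if kind == "max":
        return value <= limit
    lo, hi = limit                     # "range" bound
    return lo <= value <= hi

breaches = [n for n, v in VALUES_2001.items() if not compliant(n, v)]
print(breaches)
```

Run on the 2001 figures, the check still flags the foreign currency ratio, large exposures and placements in capital assets, consistent with the table showing marked improvement rather than full compliance.<br />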

The process of privatization of banks in which the state had a major or minor share started in 2004. The existing<br />
ownership structure of Serbian banks, with foreign ownership taking the dominant part, is a significant<br />
indicator of the internationalization of the banking sector. The internationalization process intensified especially in the period<br />
2004-2007, when the number of domestic (private and state-owned) banks fell markedly in comparison to<br />
foreign banks, in terms of number of banks, capital and total assets. By end-2009, the total number of banks<br />
operating in Serbia had fallen to 34; foreign stakeholders held majority shares in 20 of them, four were owned by<br />
domestic natural or legal persons, while the Republic of Serbia held the majority of shares in ten banks. Based on the<br />
current state, it can be concluded that the majority of the Serbian banking sector is in foreign ownership. However, when<br />
compared to other countries in the transition process (the countries of South-East Europe, which include Serbia),<br />
the level of internationalization of the Serbian banking sector is much lower.<br />

In terms of market share, the Serbian banking sector is still quite fragmented, that is to say, insufficiently concentrated,<br />
which is obvious from the fact that as many as 18 banks have a market share of less than 2% (measured as the share of the assets of<br />
a bank in the total assets of the entire banking sector). By the end of Q4 2009 only one bank - Intesa - had a<br />
market share above 10%, to be precise 14,25%. 1 Market structure, as a relevant factor in the development<br />
of the banking market, in combination with competition, is a factor in the growth of efficiency and stability of the<br />
banking system and in the economic development of every country. That is why it can be expected that the<br />
internationalization of the Serbian banking system will continue through mergers and affiliations, for the purpose of<br />
boosting capital, improving competitiveness and cutting borrowing costs.<br />

The clearing of balance sheets and the privatization of state-owned banks created the prerequisites for more efficient and more<br />
profitable business operation of the banking sector.<br />

ROE and ROA (traditionally the most relevant indicators for measuring profitability) increased<br />
rapidly in only a few years. The highest value of ROA was achieved in 2008, despite the fact that the global<br />
financial crisis had already shaken the banking sector. However, it should be noted that, due to the crisis, in Q3<br />
2009 ROA marked a drop of 0,66% in comparison to the previous quarter. More worryingly, by the end of 2009 ROA<br />
was down to 1,02% (at the end of 2008 ROA was 2,08%). Return on equity (ROE) also recorded a<br />
continual rise in the initial years of transition; it peaked in 2006, only to suffer a slight drop the following year<br />
due to the recapitalization of banks. 2 After that ROE went up again (at the end of 2008 ROE was 9,28%), but the global<br />

1 Bank Supervision - Fourth Quarter Report 2009, National Bank of Serbia’s Bank Supervision Department,<br />

http://www.nbs.rs/export/internet/cirilica/55/55_4/index.html<br />

2 Banks performed a recapitalization due to the NBS directive on the ratio of gross placements to citizens to total capital, which<br />
at the time could not exceed 150%. Having reached the maximum ratio, banks reacted by increasing capital in order to maintain<br />
and increase the volume of lending to citizens. In this way space for additional lending to citizens was created. When the crisis set in, the NBS<br />
increased the margin of this ratio to 200% as part of the stimulative measures for overcoming the crisis, which created even more space for lending.<br />



financial crisis caused it to drop (by the end of 2009, it had dropped to 4,62%). The drop in the values of both indicators is<br />
the result of a decrease in profit before tax of 42,3% in comparison to the same period of 2008. 3<br />
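The mechanics behind the drop in both ratios are straightforward: with balance sheet aggregates roughly stable, a 42.3% fall in pre-tax profit pulls ROA and ROE down by almost the same proportion. A minimal sketch; the profit, asset and equity figures below are hypothetical, and the exact NBS definitions (e.g. average rather than end-of-period balances) may differ:<br />

```python
# Profitability ratios as discussed above; figures are hypothetical,
# in billions of RSD. Actual NBS definitions may differ in detail.
def roa(pretax_profit, total_assets):
    """Return on assets, in percent."""
    return 100.0 * pretax_profit / total_assets

def roe(pretax_profit, equity):
    """Return on equity, in percent."""
    return 100.0 * pretax_profit / equity

profit_2008 = 35.0                       # hypothetical pre-tax profit
profit_2009 = profit_2008 * (1 - 0.423)  # the 42.3% fall cited above
assets, equity = 1700.0, 420.0           # held fixed for illustration

print(roa(profit_2008, assets), roa(profit_2009, assets))
print(roe(profit_2008, equity), roe(profit_2009, equity))
```

With the balance sheet held fixed, both ratios fall by exactly the 42.3% profit drop; in practice assets kept growing in 2009, which is why the observed fall in ROA (from 2,08% to 1,02%) is somewhat steeper than the profit fall alone would imply.<br />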

From the viewpoint of indicators of financial position and strength, the Serbian banking system recorded a slight<br />
nominal and real year-on-year increase in 2009 (table 2). In spite of the unfavourable macroeconomic trends caused by the<br />
global financial crisis and the weaknesses of the internal financial system (inflationary pressure, dinar depreciation), it is<br />
evident that the stability of the Serbian banking system was maintained in 2009. The capital adequacy ratio is still high<br />
(21,44% at the end of 2009), which is a result of good risk management in banks and the conservative regulatory policy<br />
of the National Bank of Serbia. The trend of a declining share of liabilities in the total liability structure also continued.<br />
On the other hand, the Serbian banking system has a stable funding source structure, since the share of domestic<br />
deposits in total liabilities is approximately 70%. In combination with the high amount of capital, it is clear that the<br />
balance sheet structure of banks in Serbia is much more favourable than that of most banks in Europe. The banking sector<br />
in Serbia also has an adequate currency structure of deposits (75,4% of total deposits are in foreign currencies). However,<br />
the maturity structure is not satisfactory, due to the dominant share of short-term deposits (95,7%) in total deposits. That is<br />
why there is a large maturity mismatch between sources and placements (long-term placements are up to five times<br />
higher than long-term sources). 4<br />

Indicator             31.12.2007.  31.12.2008.  31.12.2009.  Growth index for 2008 (3/2)  Growth index for 2009 (4/3)<br />
1                          2            3            4                  5                            6<br />
Balance sheet total     1561,8       1776,9       2160,4               114                          122<br />
Total capital            328,5        419,9        447,5               128                          107<br />
Lending activity         760,9       1027,6       1278,3               135                          124<br />
Deposit activity         960,2       1024,7       1301,2               107                          127<br />

Table 2 – Indicators of financial position and strength (in billions RSD)<br />

Source: Bank Supervision – Report for Fourth Quarter 2009, National Bank of Serbia’s Supervision Department,<br />

http://www.nbs.rs/export/internet/cirilica/55/55_4/index.html<br />
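The growth indices in table 2 are chain indices: each year-end figure divided by the figure a year earlier, times 100. A short sketch reproducing them from the table's balance figures (in billions of RSD):<br />

```python
# Chain growth indices as used in table 2: index = 100 * current / previous.
# Figures in billions of RSD (end-2007, end-2008, end-2009).
series = {
    "balance sheet total": [1561.8, 1776.9, 2160.4],
    "total capital":       [328.5, 419.9, 447.5],
    "lending activity":    [760.9, 1027.6, 1278.3],
    "deposit activity":    [960.2, 1024.7, 1301.2],
}

def growth_indices(values):
    """Rounded year-on-year chain indices for a list of year-end values."""
    return [round(100.0 * b / a) for a, b in zip(values, values[1:])]

for name, vals in series.items():
    print(name, growth_indices(vals))  # e.g. lending activity -> [135, 124]
```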

In addition, structural imbalances between banks in terms of financial strength and business results represent a<br />
challenge for the further development of the banking sector. There is a small number of leading banks on the Serbian banking<br />
market and a comparatively large number of banks with modest business results. Apart from that, there are also<br />
banks with high operating losses. These differences are expected to become more visible as competition<br />
grows stronger. 5<br />

Based on the previous analysis it can be concluded that foreign direct investments in the Serbian banking system<br />
and the appearance of foreign banks brought about a more competitive approach from the banks and increased the available<br />
funding sources. In the period 2001-2006, over 20 billion dollars flowed into Serbia through remittances of emigrants,<br />
foreign investments, donations and loans, including privatization proceeds and borrowings of the companies and banks<br />
themselves (Kovačić, 2006, p. 59). This created conditions for rapid, dynamic and stable growth of the banking sector<br />
in an economic environment characterised by slow growth, the liquidation of numerous companies and poverty.<br />
However, when compared to other countries in the region, especially to new EU members and candidates for<br />
accession, the development of the Serbian banking system is still fairly modest. The share of total liabilities in gross domestic<br />
product is lower than in the mentioned countries. Such a situation is only natural, since the transition process in Serbia<br />
started in 2001, that is to say, over ten years later than in the other European countries in transition.<br />
Still, the general state of the banking system in Serbia is much better than that of the corporate sector, which is clearly seen in the<br />
results of business operations, the growth of balance sheet assets, changes in legal regulations, and compliance with international<br />
standards in the calculation of business results and risk evaluation.<br />

3 Bank Supervision – Fourth Quarter Report 2009, National Bank of Serbia’s Bank Supervision Department,<br />
http://www.nbs.rs/export/internet/cirilica/55/55_4/index.html<br />
4 Ibid.<br />
5 Serbian banking sector 2009 – Analysis of financial position and financial results, Association of Serbian Banks, p. 13, http://www.ubs-<br />
asb.com/Portals/0/vesti/76/Analiza2009.pdf<br />



Generally speaking, the restructuring of banks, conducted with an active role of the state, has created a fairly stable<br />
banking sector, which even in the crisis maintained a good level of profitability, adequate capitalization and<br />
corporate performance. However, a problem as yet unsolved is high interest margins, due to a lack of competition.<br />
This makes it more difficult for the corporate sector to finance investments through borrowing. This is why a<br />
more intense and more subtle competitive battle on the banking market can be expected, for the purpose of keeping<br />
existing clients and attracting new ones. The basic strategic goal of the future development of the banking sector in Serbia is<br />
improved business efficiency, that is to say, a reduction of financial intermediation costs. 6<br />

3. The impact of the crisis on Serbian bank sector and the regulations' response<br />

Serbian banks felt the first impact of the economic crisis in October 2008 when, in just a month and a half, in a panic<br />
attack, foreign currency savings of 960 million euros were withdrawn. At the same time, the<br />
domestic banks in foreign ownership started prematurely repaying credit lines to their parent banks, which<br />
was an additional factor reducing the supply of foreign currency on the interbank market; consequently, a<br />
depreciation of the dinar followed.<br />

To increase the supply of foreign currency, the National Bank of Serbia took the following measures:<br />
1) it introduced incentives for new borrowing abroad, through changes to the regulations on required reserves,<br />
providing that new borrowing is not burdened by the allocation of required reserves; 2) it twice increased the<br />
share of the foreign currency required reserve that banks calculate on the foreign currency base (but keep in dinars on a<br />
bank account) - first from 10% to 20%, and then from 20% to 40% of allocated liabilities; 7 and 3) it "froze" the absolute<br />
amount of allocated required reserves at the September 2008 level in order to prevent prepayment to foreign banks.<br />
This discouraged banks from returning loans to their parent banks prematurely, because the corresponding allocated<br />
reserves would not be released on the basis of the required reserve. Significantly, this measure did not<br />
explicitly forbid banks to return loans to their parent banks; it was an indirect disincentive. Also, this<br />
measure was only temporary. (Kokotović, 2008, p. 71)<br />

All these measures, together with the everyday interventions of the National Bank of Serbia on the interbank market and<br />
the key policy rate increased to 17.75%, did not stop the depreciation of the dinar.<br />

The halt of cross-border loans, the withdrawal of savings and high interest rates on indexed loans caused a marked<br />
decline in lending in Serbia in the last quarter of 2008. However, during 2009 bank lending increased again. The<br />
total amount of loans placed at the end of 2009 was 1.278,3 billion dinars, which is 24.4% more than at the end of<br />
2008, when the total amount of placed loans was 1027,6 billion RSD. Of this, 750,4 billion RSD was approved for<br />
corporate lending, 395 billion RSD for households and 118 billion RSD for the public sector, of which 106.7 billion RSD is<br />
state debt. 8<br />

At the same time, there was an increase in non-performing loans. Based on the data on non-performing loans that<br />
banks submit to the National Bank of Serbia, at the end of 2009 their share in total approved loans, at net<br />
level, was 8.53%, or 99 billion RSD. Overall, the total credit indebtedness of companies and citizens in Serbia at the<br />
end of 2009 was 1.400 billion RSD, which is 13.3% more than at the end of 2008. 9<br />

Thanks to the policy of high required reserve rates on external borrowing and on domestic and foreign<br />
currency deposits, in the advanced stages of the economic crisis the Serbian banking sector showed coverage of foreign<br />
currency deposits by reserves of up to 86%, which is much higher than the indicators in other countries in the region, where<br />
it was 35%. 10 However, compared to the neighbouring countries, Serbia has the highest allocation rates for required<br />
reserves (table 3), which led to lower efficiency and profitability of Serbian banks.<br />

State                    Required reserve rate<br />
Serbia                   During 2010: 5% on dinar savings, 40% on citizens' foreign currency savings, and 45% on corporate foreign currency deposits; 2011 plans: uniform 25% foreign currency required reserve and 5% dinar required reserve<br />
Croatia                  13%<br />
Bosnia and Herzegovina   14% on short-term sources and 7% on long-term sources<br />
Bulgaria                 10% on domestic sources, 5% on foreign sources and 0% on state sources<br />
Romania                  15%<br />
Hungary                  2%<br />
Slovakia                 2%<br />
Slovenia                 2%<br />
Albania                  10%<br />
Table 3 – Rates of required reserves in Serbia and neighbouring countries<br />
Source: We are all speculators, Politika online, May 20th 2010, http://www.politika.rs/rubrike/Ekonomija/Svi-smo-mishpekulanti.sr.html<br />
6 Basic parameters of the efficiency of banks in Serbia have high negative values. Costs of intermediation (net interest margin) are much higher<br />
than in developed countries. According to: Boško Živković, Financial Sector - Banking, within the study: Reforms in Serbia: Achievements and<br />
Challenges, Center for Liberal-Democratic Studies, Belgrade, June 2008, p. 41.<br />
7 This was a temporary measure, adopted with the aim of helping the surplus of foreign liquidity remain in the Serbian banking sector, rather than<br />
being used for settlements with banks abroad or for the reduction of foreign deposits.<br />
8 Bank Supervision – Report for Fourth Quarter 2009, National Bank of Serbia’s Supervision Department,<br />
http://www.nbs.rs/export/internet/cirilica/55/55_4/index.html<br />
9 Banking Sector in Serbia 2009-2010, Business Magazine, 31.01.2010, http://www.naslovi.net/2010-01-31/poslovni-magazin/bankarski-sektoru-srbiji-2009-2010/1521472<br />
10 Financial Stability Report, National Bank of Serbia, September 2008, p. 43, http://www.nbs.rs/internet/latinica/90/index.html<br />

Although the Serbian banking sector finished business year 2009 with a positive financial result (14 banks showed an<br />
operating loss, making up around 17% of the total banking sector), the efficiency of resource use is not sufficient,<br />
which was reflected in the return on assets employed (ROA) of 1.02% and the return on equity (ROE) of 4.62%<br />
at the end of 2009. Though both rates recorded mild growth in the course of 2009, the Serbian banking<br />
sector still ranks lower in profitability than the neighbouring countries. It is clear that the high rates of<br />
immobilisation of banks' financial potential through required reserves had a big impact on the efficiency and<br />
economy of the use of assets employed.<br />

Looking at the current situation, it can be concluded that the Serbian banking sector has weathered the main impact<br />
of the world economic crisis well. This is primarily the result of the National Bank of Serbia's conservative<br />
regulation, with its strong disincentives for corporate and household lending. In fact, the National Bank recognized<br />
in time the warning signals that pointed to the danger of excessive growth in bank lending. During 2007, macro<br />
indicators showed that loan growth was already in the alarm zone and rapidly approaching a critical point.<br />
At the same time, citizens' loans were growing faster than citizens' deposits, there was a major concentration of<br />
lending in long-term indexed loans, and the maturity structure was considerably mismatched (the ratio of loans over<br />
12 months to term deposits of the same maturity was 4:1).<br />

The fact that the crisis led to a 2.8% fall in economic activity in the country during 2009 cannot, of course, be<br />
bypassed. However, none of the 34 banks currently operating on the market had major liquidity<br />
problems.<br />

Even though almost all commercial banks saw profits fall during 2009 compared to the previous year, stability<br />
was maintained, and foreign currency savings even increased compared to the pre-crisis period.<br />
At the end of 2009 citizens' savings stood at 6.2 billion euros, or 577 billion RSD, which means that over the last ten<br />
years total savings had increased by 4500%, or 45 times (Dugalić, 2010, p. 4). This success is all the greater<br />
considering that at the end of 2008, in an atmosphere of uncertainty due to the large number of bank bankruptcies<br />
abroad, almost one billion euros of foreign currency savings were withdrawn from the banks. The reasons for the<br />
positive changes in citizens' savings should primarily be found in: the successful restructuring of the banking<br />
sector; the restoration of public confidence in banks; the decision to increase the amount of insured deposits from 3000 to<br />
50000 euros, whose payment in case of liquidation or bankruptcy of a bank is guaranteed by the state; and attractive<br />
bank offers, reflected in high interest rates on domestic and foreign currency savings. (Dugalić, 2010, p. 4)<br />

The 2010 IMF report can serve as confirmation of the above: it estimated that in 2009 the measures of the<br />
Government and the National Bank of Serbia mitigated the negative consequences of the economic crisis in Serbia,<br />
and that the country ended that year with better macroeconomic indicators than other Eastern<br />
European countries.<br />

The negative consequences of the crisis in Serbia would have been more serious if the foreign-owned banks had not<br />
kept their loans at the 2008 level. In fact, in March 2009, on the recommendation of the IMF,<br />
ten banks (a number later increased to 27) signed the Financial Sector Support Program in Vienna, the so-called<br />
Vienna Agreement, committing themselves to reprogram, during 2009, private debt falling due in the amount of<br />
4.5 billion euros. In that way, companies in Serbia were enabled to postpone the payments of earlier loans<br />



and at the same time to take new, subsidized loans for the settlement of regular obligations towards the state and suppliers.<br />
Considering that the goal of the National Bank of Serbia is to attract capital and new loans in a market manner, the<br />
Vienna Agreement represented a temporary and emergency measure, whose validity ceased on January 1st 2011.<br />

In accordance with the Vienna Agreement, the National Bank of Serbia prepared special measures to<br />
support the financial stability of the country, with the goal of preserving public trust in the banking system and maintaining<br />
financial and macroeconomic stability. These special measures were available only to those banks<br />
that took on the obligation to maintain their exposure to Serbia until the end of 2010 at the December 2008 level,<br />
while maintaining capital adequacy and liquidity indicators at the given levels.<br />

For the banks that met these conditions, the National Bank of Serbia provided the following facilities: new<br />
liquidity sources, i.e. dinar loans with a repayment period of no longer than 12 months, and short-term foreign<br />
currency swap transactions, including the abolition of reserve requirements for deposits and loans received from<br />
abroad from October 2008 to December 2010, until their maturity date. Banks were also allowed to include in their<br />
capital, for regulatory purposes, subordinated liabilities up to 75% of their core capital; in the calculation of arrears<br />
on loans whose repayment terms were rescheduled under the defined framework, and for the purposes of their<br />
classification requirements, banks were permitted to apply the subsequently agreed maturity date, as well as to raise<br />
the foreign exchange risk ratio from 10% to 20% of their capital. 11<br />

From today’s perspective, the signatory banks not only kept their exposure to Serbia, but increased<br />
their investments by 2% during 2009. At the same time, even though the number of non-performing loans at<br />
the end of 2009 was higher than in the same period of 2008, the banking system remained adequately capitalized:<br />
the capital adequacy ratio of 21.44% is 9.44 percentage points above the legal minimum.<br />
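The capitalization buffer quoted above is simple percentage-point arithmetic over the prescribed 12% minimum:<br />

```python
# Buffer of the capital adequacy ratio (CAR) over the regulatory minimum,
# in percentage points; 12% is the prescribed minimum cited in the text.
def capital_buffer(car_pct, minimum_pct=12.0):
    return round(car_pct - minimum_pct, 2)

print(capital_buffer(21.44))  # -> 9.44, as cited above
```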

In order to assess the ability of the banking system to maintain the minimum prescribed capital ratio of 12% even in<br />
the case of worsening business conditions, the National Bank of Serbia conducted three stress tests. In the last<br />
test, whose results were published in October 2009, the National Bank of Serbia used a regression model<br />
intended to show the influence of independent variables (the change in GDP, the output gap, depreciation<br />
and changes in interest rates) on the deterioration of the loan portfolio and losses on non-performing loans<br />
during 2010 and 2011. (Jelašić and Erić Jović, 2009)<br />
Sixteen banks were tested, covering over 80% of the balance sheet total of the banking system.<br />
Even under the pessimistic scenario, in which the increase in non-performing loans would reach 13.9%, the<br />
capital adequacy indicator would still stay above the legal minimum of 12%. The fall in economic activity, seen<br />
through the change in GDP and the output gap, had the biggest impact on the increase in non-performing loans.<br />
Based on that, it can be concluded that even if the pessimistic scenario were realized, the banking sector<br />
would be capable of covering all potential losses with the existing capital and reserves. This means that the banking<br />
sector in Serbia has no need for emergency or preventive financing, because the capital adequacy indicator<br />
would still be above the statutory minimum, i.e. it would fall from the initial 22.36% to 16.04% in 2010, i.e.<br />
15.16% in 2011. 12<br />
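The kind of regression the stress test describes can be illustrated with an ordinary least-squares sketch. Everything below is invented for illustration (sample size, distributions and coefficients); the actual NBS model, data and estimates are not reproduced here:<br />

```python
import numpy as np

# Illustrative only: a least-squares sketch of a stress-test regression in
# which NPL growth is explained by the change in GDP, the output gap,
# depreciation and interest-rate changes. All data are synthetic.
rng = np.random.default_rng(0)
n = 40  # hypothetical quarterly observations

gdp_change   = rng.normal(1.0, 2.0, n)   # % change of GDP
output_gap   = rng.normal(0.0, 1.5, n)   # % of potential output
depreciation = rng.normal(2.0, 4.0, n)   # % change of the exchange rate
rate_change  = rng.normal(0.0, 1.0, n)   # p.p. change in interest rates

# Hypothetical data-generating process: NPLs rise when activity falls
# and the currency depreciates.
npl_change = (-0.8 * gdp_change - 0.5 * output_gap
              + 0.3 * depreciation + 0.4 * rate_change
              + rng.normal(0.0, 0.5, n))

X = np.column_stack([np.ones(n), gdp_change, output_gap,
                     depreciation, rate_change])
beta, *_ = np.linalg.lstsq(X, npl_change, rcond=None)
print(beta)  # estimates should sit near (0, -0.8, -0.5, 0.3, 0.4)
```

The fitted coefficients can then be fed a pessimistic path of the explanatory variables to project NPL growth, which is the logic the published test applies to capital adequacy.<br />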

Compared to the neighbouring countries, only the National Bank of Serbia took all the relevant prudential<br />
measures in order to preserve macroeconomic and financial stability and the safety of the banking sector in the crisis<br />
period. Recognizing the importance of banks for the stability of the financial system in Serbia, the National Bank pursued<br />
an anti-cyclical policy and took measures to minimize the risks of loan growth. In that way, strong<br />
absorbers to cover unexpected losses were created, and the banking sector met the crisis adequately capitalized and<br />
highly liquid.<br />

Prudential measures (√ = measure taken, – = not taken)                              Serbia  Croatia  Hungary  Romania  Bulgaria<br />
High level of required reserves                                                        √       –        –        √        –<br />
Exposure limits and risk concentration                                                 √       √        √        √        √<br />
Measures aimed at slowing loan growth                                                  √       √        –        √        √<br />
Strengthening of foreign currency liquidity                                            √       √        √        –        –<br />
Memos on cooperation with international institutions*                                  √       √        √        √        √<br />
Programs aimed at maintaining the exposure of banks                                    √       –        √        –        –<br />
Relaxation of the regulatory framework                                                 √       √        √        √        √<br />
Memos on cooperation with domestic institutions regarding stability preservation**     √       √        √        √        √<br />
* On programs of consolidated supervision of bank groups or "home-host" supervision for a more comprehensive view of risks<br />
** Mainly on the relation: central bank-supervisor-Ministry of Finance<br />
Table 4 – Measures taken by supervisory institutions<br />
Source: Mira Erić Jović, Serbian banking sector in the global financial crisis, presentation, Palic, May 2009, p. 14.<br />
11 Vienna Initiative, National Bank of Serbia, http://www.nbs.rs/export/internet/cirilica/18/18_6/index.html<br />
12 Banking sector highly capitalized and resistant to macroeconomic shocks, National Bank of Serbia, 13.01.2010,<br />
http://www.nbs.rs/internet/latinica/scripts/showContent.html?id=3876&konverzija=yes<br />

It can be concluded that the National Bank of Serbia, by establishing this set of prudential measures, 13 sought<br />
to mitigate the negative effects of the crisis through improving liquidity, preserving solvency and strengthening trust in<br />
the banking system.<br />

4. Open questions and prospects<br />

Evidently, the global crisis did not have a major impact on the Serbian banking system, which is still comparatively stable,<br />
with high liquidity and adequate capitalization. First of all, there were no direct risks in terms of investments in<br />
mortgage loan securitizations and the other high-risk financial instruments at the root of the global financial crisis.<br />
Furthermore, the restrictive measures of the NBS, which ensured high liquidity, adequate capitalization and ample<br />
provisions for uncollectible loans, turned out to be an advantage of the national banking system in comparison to<br />
other countries in the region. At the same time, these measures fairly quickly mitigated the negative psychologically<br />
induced factors that led to the widespread withdrawal of citizens' funds on the very brink of the crisis. Since<br />
the banks successfully met citizens' demands for the withdrawal of funds, already in December 2008 not only<br />
was the outflow stopped but there was also a new inflow of deposits. This, in practice, preserved the credibility of the<br />
banking sector.<br />

However, the crisis did have an impact on other sectors, which calls for highly coordinated action by the<br />
Government, the National Bank of Serbia and all other relevant institutions in order to maintain the credibility of the<br />
financial sector, enable the attraction of foreign capital and improve Serbia's reputation. Accordingly,<br />
at the beginning of 2011 the Serbian Government adopted a programme of subsidized loans for<br />
economic subjects and citizens, with the aim of easing the negative effects of the crisis. Seven billion dinars have been<br />
allocated for liquidity loans, the financing of working assets and loans for export businesses with subsidized interest<br />
rates. Even though this kind of lending can invite various frauds, at this point it is a necessary action that will<br />
boost economic activity and mitigate the negative effects of the recessionary trends in the real sector.<br />

In April 2011 the Government adopted measures for rescheduling citizens' debt that will reduce the monthly instalments of long-term loans for a period of two years. Additionally, a grace period of two years will apply to all loans with maturity longer than 12 months, except for subsidized consumer and cash loans and loans with an already arranged grace period or rescheduling. At the moment the measures were adopted, 11 leading banks had accepted the debt rescheduling programme. At the same time, the National Bank of Serbia adopted a decision that ensures greater flexibility of banks in supporting citizens and enterprises.<br />

In particular, the limit on the monthly instalment deducted from salary was abolished, so total monthly debt obligations are now set in relation to regular net monthly income reduced by the minimum consumer basket for the first adult household member. Furthermore, clients no longer have to put down a deposit or down payment of 30% when applying for a foreign currency loan. The National Bank of Serbia has also reduced the period in which the debtor should discharge obligations towards the bank for the purpose of a more favourable classification according to delay criteria, from six to three months, and the regularity in the discharge of obligations from 60 to 30 days within the period.<br />

13 Required reserves for foreign currency loans have been abolished, the maturity date was extended by one year, conversion of indexed credits into dinars was enabled, as well as premature repayment, the ratio of citizens' loans to capital was increased from 150% to 200%, etc.<br />
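The instalment rule above can be illustrated with a short sketch. This is a hypothetical illustration, not the NBS formula: it assumes that total monthly debt obligations are capped at some fraction of net income minus the minimum consumer basket, with the 50% ratio chosen purely as a placeholder.<br />

```python
def max_monthly_obligations(net_income: float, consumer_basket: float,
                            ratio: float = 0.5) -> float:
    """Illustrative cap on total monthly debt obligations.

    Assumes the cap is a fraction (`ratio`, a placeholder value) of net
    monthly income reduced by the minimum consumer basket for the first
    adult household member, as described in the text.
    """
    disposable = max(net_income - consumer_basket, 0.0)
    return disposable * ratio

# A borrower with RSD 80,000 net income and an RSD 20,000 consumer basket
# could, under the assumed 50% ratio, service at most RSD 30,000 a month.
cap = max_monthly_obligations(80_000, 20_000)
```

The point of the construction is that the binding quantity is disposable income after subsistence costs, not gross salary, which is why abolishing the flat salary-based limit changes affordability for low-income borrowers.<br />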

Although uniform required reserve rates applied until mid-2010 (as shown in Table 3), at the beginning of 2011 a decision was adopted introducing differentiated rates depending on the maturity of dinar and foreign currency funding sources. Hence, for dinar funding sources with maturity up to two years the required reserve rate will remain 5%, while required reserves will not be calculated for dinar sources with maturity over two years. For foreign currency funding sources with maturity up to two years the rate is 30%, while it remains 25% for maturities over two years. At the same time, the dinar portion of the foreign currency required reserves is calculated at different rates: 15% for sources with up to two years maturity and 10% for sources with over two years maturity.<br />
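The differentiated rates described above reduce to a small lookup. The sketch below encodes only the rates as stated in the text (the early-2011 decision), not the full NBS calculation methodology.<br />

```python
def required_reserve_rate(source: str, maturity_years: float) -> float:
    """Required reserve rate by funding source and maturity, per the text:
    dinar sources: 5% up to two years, 0% over two years;
    foreign currency sources: 30% up to two years, 25% over two years."""
    short = maturity_years <= 2
    if source == "dinar":
        return 0.05 if short else 0.00
    if source == "fx":
        return 0.30 if short else 0.25
    raise ValueError(f"unknown funding source: {source}")

def dinar_portion_of_fx_reserve(maturity_years: float) -> float:
    """Dinar portion of FX required reserves: 15% up to two years,
    10% over two years, per the text."""
    return 0.15 if maturity_years <= 2 else 0.10
```

For example, a euro deposit with a three-year maturity carries a 25% required reserve, of which the 10% dinar-portion rate applies; a dinar source of the same maturity carries no required reserve at all, which is the intended incentive toward longer-term dinar funding.<br />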

The basic objective of this decision is related to controlling inflation, i.e. preventing large amounts of dinars reaching the market in a state of high inflation. Moreover, the aim was to achieve more balanced required reserve rates, which ranged from 9% to 43% in the previous period. Given the different effects the new decision will have on banks, depending on their activities in the previous period, compliance with the calculation and allocation of required reserves will be phased in over three instalments, starting on February 17 and ending with the calculation on April 17, 2011.<br />

The biggest dilemma about this new decision on required reserve rates concerns the increase of the dinar portion of foreign currency sources. The banks fear that higher dinar required reserves will reduce the supply of dinar loans and increase their financing costs. This measure is somewhat in conflict with the dinarization on which the Government insists. In effect, a measure is introduced that limits dinar liquidity, which raises the question: how will liquidity be provided for dinar loans? With that in mind, a rise in loan prices is expected, which will ultimately depend on competition and individual bank policy.<br />

In order to maintain the credibility of the overall system of financial intermediation, it is necessary to pay more attention to the protection of financial services consumers, as this can minimise examples of bad practice. To that end, by the end of 2010 a draft of the Law on Financial Services Consumer Protection was adopted, which set the opinions of the National Bank and the Association of Serbian Banks in conflict. The aim of this new law is to regulate the protection of the rights and interests of financial services consumers and to improve the position and informing of clients. It should provide a better flow of information and stronger rights for clients in their dealings with commercial banks. Even though the draft was designed in accordance with European directives, the banks are right to point out certain controversial provisions that serve the interests of neither the banks nor the clients. If such a draft were adopted, certain types of loans would have to be abolished or their price would have to be increased. This applies primarily to credit cards and dinar loans with fixed interest.<br />

Finally, it should be noted that even though the Serbian banking sector recorded a slight rise in indicators of financial strength and results by the end of 2009, caution should not yet be abandoned in favour of too much optimism. The recession of 2009 and the high functional dependency between the economy and the banking sector represent the key challenges for the stability of banks in Serbia. This analysis emphasises that the main risks to the banking sector are caused by the prolonged recession and the growing lending risk due to foreign currency borrowings of companies and citizens (in the structure of the lending portfolio of banks, as much as 75% of loans are indexed in foreign currency). In light of these facts, an increase in uncollectible loans is expected in the upcoming years. Since the granted loans are tied to the euro or the Swiss franc, stability will depend on the monetary policy of the NBS. Amidst the expected challenges in the country and abroad, the priorities of the banking sector in Serbia must be based on efficient risk management and asset quality, in order to continue the upward trend of capital adequacy.<br />

5. References<br />

Annual Report 2001. National Bank of Yugoslavia. http://www.nbs.rs/internet/latinica/90/index.html.<br />

Banking Sector in Serbia 2009-2010. Business Magazine. 31.01. 2010. http://www.naslovi.net/2010-01-31/poslovnimagazin/bankarski-sektor-u-srbiji-2009-2010/1521472.<br />



Banking Supervision – Report for Fourth Quarter 2009. National Bank of Serbia’s Supervision Department.<br />

http://www.nbs.rs/export/internet/cirilica/55/55_4/index.html.<br />

Dugalić, V. (2010). From Banking Point of View – Introduction. Banking. Association of Serbian Banks. Belgrade.<br />

No. 1-2.<br />

Erić Jović, M. (2009). Serbian Banking Sector in the Global Financial Crisis. presentation. Palic. May.<br />

Financial Stability Report, National Bank of Serbia. September 2008.<br />

http://www.nbs.rs/internet/latinica/90/index.html.<br />

Jelašić, R., & Erić-Jović, M. (2009). Current Monetary and Macroeconomic Trends – Results of Stress Tests of<br />

Serbian Banking System. National Bank of Serbia. Belgrade. October 8.<br />

http://www.nbs.rs/export/internet/latinica/15/konferencije_guvernera/prilozi/20091008_amk_prezentacija.pdf .<br />

Kokotović, S. (2008). Effects of NBS Measures on Banking System – Review 3, Reviews: Global Financial Crisis<br />

and Serbia. Quarter monitor No. 15. October-December.<br />

Kovačić, I. (2006). Changes in Serbian Banking 2003-2006. Survey – Republic of Serbia, No 2. Belgrade.<br />

National Bank of Serbia. Banking Sector Highly Capitalized and Resistant to Macroeconomic Shocks. 13. 01. 2010.<br />

http://www.nbs.rs/internet/latinica/scripts/showContent.html?id=3876&konverzija=yes.<br />

Reinhart, C. M., & Rogoff, K. S. (2009). Is the 2007 US Sub-prime Financial Crisis So Different? An International Historical Comparison. Panoeconomicus. Vol. 3.<br />

Serbian Banking Sector 2009 – Analysis of Financial Position and Financial Results. Association of Serbian Banks.<br />

http://www.ubs-asb.com/Portals/0/vesti/76/Analiza2009.pdf.<br />

Vienna's Initiative, National Bank of Serbia, http://www.nbs.rs/export/internet/cirilica/18/18_6/index.html<br />

We Are All Speculators. Politika online. May 20, 2010. http://www.politika.rs/rubrike/Ekonomija/Svi-smo-mishpekulanti.sr.html.<br />

Živković, B. (2008). Financial Sector – Banking. Within the study: Reforms in Serbia: Achievements and<br />

Challenges. Center for liberal-democratic studies. Belgrade. June.<br />



COLLECTION MANAGEMENT AS A CRUCIAL PART OF CREDIT RISK MANAGEMENT DURING THE CRISIS<br />

Lidija Barjaktarovic, Snezana Popovcic-Avric, Marina Djenic<br />

Faculty of Economics, Finance and Administration, FEFA, Serbia<br />

Email: lbarjaktarovic@fefa.edu.rs, www.fefa.edu.rs<br />

Abstract: Collection management is one of the major areas of today's modern banking. With the onset of the economic crisis and of general illiquidity in the economy, banks increasingly have to manage portfolios with a constant increase in non-performing loans (from 2% on average up to 20% across Europe), which has a direct impact on the quality of the portfolio (in terms of classification, reservation and profitability). The influence is reflected both in profitability through the income statement of commercial banks (in its early stage) and in the capital of the banks themselves (in the late stage). For that reason, management has to organize collection so as to create the best model for retaining or improving the quality of the credit portfolio. This issue is one of the biggest challenges for the banking industry.<br />

The paper points to the complexity of the collection model and proposes a new collection model for corporate loans. Developed banking markets have fully developed mechanisms, procedures and organized places where bad debts are traded and, accordingly, advanced collection models. Domestic practice will follow the practice of developed countries.<br />

Key words: Collection management, credit risk, NPL, LGD, specific provisioning<br />

JEL classification: G21<br />

1. Introduction<br />

Collection management has an impact on the bank's profitability and all risk parameters (Loss Given Default, Standard Risk Cost and Risk Adjusted Price), on the decrease of the new non-performing loan (NPL) portfolio and on the improvement of the quality of the credit portfolio. During the crisis the ratio of NPLs to the total credit portfolio has been increasing. Before the crisis this ratio in Serbia was between 3% and 5%; now it is between 10% and 15%, and in some banks up to 20% (at the end of the third quarter of 2010, according to data of the National Bank of Serbia). This bad structure directly worsens the quality of the portfolio and increases its cost. Additional exposure is therefore necessary but limited, because it is hard to cover the costs of the existing portfolio.<br />

In order to improve the quality and profitability of the credit portfolio, we propose a collection model for corporate loans (on the basis of an analysis of current credit portfolio data in Erste Bank Serbia). This model has been in practical use in Erste Bank Serbia since October 2010. Its real benefit will become apparent during this year.<br />

2. Collection management – definition and importance<br />

Collection management is the process of planning, organizing, controlling and monitoring the credit customers of the bank with the purpose of collecting credit receivables. It also includes all activities within the bank and in relation to the customer aimed at collecting credit receivables efficiently. The bank should be aware that only partnership and a close relationship with the customer can build a healthy portfolio without delays and problems in the collection of due debts.<br />

Collection management is a complex part of the credit process which involves different organizational parts of the bank and employees with different competences. It is important to achieve synergy in their interaction and cooperation. It is also important to emphasize that successful collection management depends on proper monitoring of external factors and information.<br />



In accordance with Basel II 1 the credit portfolio of the bank consists of: 1) Performing loans – PL (live credits which have not been declared due and payable, and where the repayment of principal and interest is not more than 90 days overdue) and 2) Non-performing loans – NPL (loans which have been declared due and payable, or which are not being serviced in accordance with the terms and conditions of the credit contract).<br />
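The PL/NPL split above can be expressed as a one-line classifier. This is a sketch of the 90-day rule as stated in the text; real Basel II implementations add materiality thresholds and further conditions.<br />

```python
def classify_loan(declared_due_and_payable: bool, days_past_due: int) -> str:
    """Performing (PL) vs non-performing (NPL) per the 90-day rule above:
    a loan is NPL if it has been declared due and payable, or if principal
    or interest repayment is more than 90 days overdue."""
    if declared_due_and_payable or days_past_due > 90:
        return "NPL"
    return "PL"
```

Note that the two triggers are independent: a loan only 30 days overdue is still NPL once the bank declares it due and payable.<br />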

The most important benefits of successful collection management are: 1) an increase in the bank's profitability through the release of reservations or through other income, 2) a decrease in the effects of new NPLs and an improvement in portfolio quality, 3) an impact on the main risk parameters: LGD (Loss Given Default), SRC (Standard Risk Cost) and RAP (Risk Adjusted Price).<br />

The most important influence of collection management is the decrease of reservations (which can be booked in the profit and loss account or against equity), which increases profitability or equity. For groups of customers with a reservation defined at group level, collection management has a small impact because the level of their reservation is low. But for small companies, or companies which are not part of any group, the level of reservation in relation to the gross level of receivables is relatively high (30%, 50%, 70%, …), and successful collection management releases that part of the cost in the profit and loss account or in the equity.<br />

For example 2, a credit customer has matured obligations toward the bank in the amount of EUR 1.2 million, of which: 1) EUR 1 million represents the matured principal with regular interest, and is the balance-sheet part of the receivable; 2) EUR 0.2 million represents penalty interest and fees, which are booked off-balance within the bank. The total matured receivables of the credit customer toward the bank are therefore EUR 1.2 million. The balance-sheet part of the receivable carries a reservation of EUR 0.6 million, and the off-balance part a reservation of EUR 0.2 million, so the total reservation is EUR 0.8 million.<br />

If the bank sells the collateral for EUR 1.2 million, it will directly decrease the cost in the profit and loss account through the release of the EUR 0.6 million reservation, and record other income of EUR 0.2 million (because the bank managed to collect the off-balance part too). In this way the impact of collection management on the profitability of the bank is an increase in profit of EUR 0.8 million.<br />
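The arithmetic of the worked example can be checked directly (all figures in EUR million, taken from the text):<br />

```python
# Figures from the example above (EUR million).
balance_receivable = 1.0        # matured principal with regular interest
off_balance_receivable = 0.2    # penalty interest and fees (off-balance)
balance_reservation = 0.6
off_balance_reservation = 0.2
collateral_proceeds = 1.2       # collateral sold at the full receivable

# Collecting the full amount releases the balance-sheet reservation as a
# cost decrease in the P&L, and the collected off-balance part is booked
# as other income.
reservation_release = balance_reservation
other_income = off_balance_receivable
profit_impact = reservation_release + other_income
# profit increase of EUR 0.8 million, matching the text
```

The key accounting point is that the two components hit the income statement through different lines (cost reversal vs other income) but sum to the same profit effect.<br />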

In practice this is a very important benefit, especially for banks whose portfolios show stagnating or decreasing trends.<br />

On the other hand, controlling the increase of the NPL portfolio is one of the most important tasks a bank has during the year. This task has become very demanding, especially with the onset of the world economic crisis, because in many banks the credit portfolio stagnated or declined while NPLs grew naturally. At the same time the economic crisis drove NPL growth, which increased the share of NPLs in the total credit portfolio. If this percentage was between 3% and 5% before the crisis, it is now between 10% and 15%, and very often goes up to 20%. This bad structure directly determines the quality of the portfolio and its pricing (which increases). Thus, in order to increase volume, additional exposure is necessary but limited, because it is impossible to cover the costs of the existing portfolio. 3<br />

Finally, it is important to mention that collection management and the management of NPLs are the most important determinants of PD (Probability of Default) and LGD (Loss Given Default).<br />

3. Proposal of the collection management model within the credit risk management<br />

Credit risk is the probability that the bank will not be in a position to collect its total receivables from a customer, i.e. the principal amount and all associated interest and fees. Bearing in mind the nature of the credit transaction, there are three types of credit risk: 1) default risk (exists at the moment of loan approval), 2) premium credit risk (exists during the use of the loan and relates to problems in its repayment) and 3) credit rating deterioration risk (exists at the moment of loan repayment). It is important to emphasize that credit risk can be monitored in the banking book (a critical threat to the credit portfolio which affects the bank's liquidity) and in the trading book of the bank.<br />

1 The relevant regulation for banking risk management is determined by the Basel rules. Also important for collection management is the local regulation of each country concerning the business operations of legal entities, the liquidation or bankruptcy of companies, and the laws governing collaterals (mortgages, pledges, etc.).<br />

2 The example is based on the current legal regulation in Serbia.<br />

3 The examples are based on the average of the biggest banks on the Serbian market. Source: National Bank of Serbia.<br />

The bank is successful if it fulfils the following criteria: 1) the volume of newly approved loans and the increase of the credit portfolio are in accordance with the defined targets, 2) the bank is profitable, and 3) the level of non-performing loans is contained. Therefore, we may conclude that successful collection management is one of the most important tasks for the bank.<br />

The collection management model should cover the following areas: 1) aims of the model, 2) organization of the model, 3) instruments of the model, 4) control and monitoring within the model (PL and NPL).<br />

3.1 Aims of the collection management model<br />

The basic aims of collection management of the bank's credit portfolio are: 1) regular servicing of the customers' credit commitments, 2) minimizing delays in servicing credit commitments toward the bank, 3) minimizing the number of NPLs in the credit portfolio.<br />

The advanced aims of collection management of the bank's credit portfolio are: 1) synchronizing the additional costs on problem loans with the collection of receivables from problematic customers, in order to minimize the volatility of costs in the profit and loss account; 2) predicting potential problems and the future allocation of costs. Advanced collection management means that the costs within one year grow progressively, from 0 to the value defined for that particular year. Only with constant costs is it possible to predict the total result of the bank more precisely; volatility of those costs is not desirable.<br />

3.2 Organization of the collection management model<br />

The organization of the model includes: 1) classification of the debtors in the model, 2) participants and their tasks<br />

within the model.<br />

There is a wide range of criteria for segmenting the credit customers of the portfolio, such as: exposure, industry, maturity, customer rating, and diversification in accordance with the delay basket (in practice: delay up to 30 days, delays between 30 and 60 days, delays between 60 and 90 days, and delays above 90 days). Through credit policy, banks try to define targets which represent the optimal diversification of the portfolio, taking into account their own needs, market parameters and the regulatory framework.<br />
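The delay baskets listed above map naturally to a small helper; a sketch:<br />

```python
def delay_basket(days_past_due: int) -> str:
    """Assigns a loan to one of the delay baskets used in practice,
    per the text: up to 30, 30-60, 60-90, above 90 days."""
    if days_past_due <= 30:
        return "up to 30 days"
    if days_past_due <= 60:
        return "30 to 60 days"
    if days_past_due <= 90:
        return "60 to 90 days"
    return "above 90 days"
```

The 90-day boundary is the same one used in the PL/NPL split earlier, so the last basket corresponds to loans that are non-performing on the delay criterion alone.<br />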

In the context of collection management there are two critical determinants of portfolio quality and diversification: 1) the rating of the customer (which represents the probability that the customer will not fulfil commitments on the due date; the rating consists of two sets of parameters, quantitative and qualitative criteria, and once the rating is defined there are two important groups of customers within the portfolio, liquid and non-liquid customers, i.e. PL and NPL) and 2) the delay in fulfilling commitments toward the bank (i.e. the delay basket). For the final segmentation of a customer in the portfolio, expert opinion is relevant (based on the individual assessment of the bank's risk management division). The main subjects in the creditor-debtor relation are the creditors (banks) and the debtors (companies).<br />

Banks are the most important players on the financial markets. Central banks execute monetary and credit policy through commercial banks. In terms of collection management and the assessment of credit risk within the bank, the following divisions are relevant: the corporate division, the risk management division (corporate credit risk management, collateral management and collection management), the credit committee, the legal division and the back office (credit administration). It is important that they cooperate well. All of these divisions and departments have precise instructions for processing a loan application. The corporate division is important in daily cooperation with the customer, as well as at the time of approving and monitoring the loan. Good communication and a partnership relation are essential in the collection management process. The risk management division is responsible for individual credit risk assessment in the following segments: initial crediting, collection management and collateral management. The tasks of the risk management division are: 1) restructuring: business renewal or refinancing, monitoring of problematic loans, watch lists, stress renewal or refinancing; 2) liquidation: collection from bankruptcy, collection from collateral, sale of business, sale of receivables; 3) reporting and analysis: delay reports, work-out reports, provisioning management, support in the budgeting process.<br />

The debtor (company) represents a crucial part of collection management. At the same time the company is at the heart of financial restructuring. The reasons why a company may have problems in repaying a loan differ, and they are very hard to identify at the beginning. Typical causes of problems are: management (35%), market and competition (30%), financials (20%) and other (15%). 4 It is evident that management bears the greatest responsibility for a newly problematic situation. When entering the restructuring process, a change of management can be a very hard and painful issue for the shareholders of the company. If the diagnosis of the problems clearly calls for it, this unpopular measure should be implemented as soon as possible.<br />

It should also be mentioned that in Serbia, in many cases, the owners of companies are at the same time their managers. In that situation, it is very difficult to talk with the owners and persuade them that they are bad managers.<br />

Banks are usually the biggest creditors of the company. It should be noted that, compared to the company's other creditors, the approval of loans is the bank's main business. So, when the company faces financial problems, banks should be ready to take a proactive approach in resolving the new situation. This means that the bank will: 1) start negotiations with other bank creditors of the same company (or group of connected companies) and 2) take action toward the company.<br />

3.3 Architecture of the collection management model<br />

Every house stands on its own foundations; there is no successful model without clear instructions. The organization of the process is the pillar and basis of the further development and improvement of this process. The organization of the model is established on a bilateral basis between the creditor (bank) and the debtors (companies). It should, however, also be established more widely, taking into account the market and current regulation. Important parts of the organization are the collection management processes for the PL portfolio and the NPL portfolio.<br />

Collection management of the PL portfolio consists of two areas: 1) processes and actions within the bank which result in a final action toward the client, 2) processes and actions toward the client. These processes serve to present the bank's position and attitude toward the credit customer. The first process, within the bank, may fluctuate, but the second should always be consistent and linear (it should be standardized and defined by procedures of the bank adopted by the supervisory body).<br />

Collection management of the NPL portfolio is very hard to standardize. In practice, a huge number of credit files contain discrepancies whose solutions are not directly covered by the standards. The possible ways of solving the problem are: 1) a moratorium (subject to analysis and discussion of the strategy), 2) implementation of the adopted decision to collect receivables from the core business of the company (restructuring, refinancing, restructuring of the business, etc.), 3) implementation of the decision that collection will be carried out from the collateral.<br />

3.4 Instruments of the collection management model<br />

The moment of loan approval is crucial for making a final decision about the possible way of collecting the receivable. At the time of analyzing and approving the loan, the bank considers two possibilities for credit repayment: 1) the source of repayment can be cash flow from the regular (core) business, i.e. the primary source of repayment (CF1), and 2) the source of repayment can be the collateral of the loan, i.e. the secondary source of repayment (CF2).<br />

4 This statistic represents a specificity of the Serbian market (based on the experience of risk management divisions of Serbian<br />

banks). Source: Bibliography 7.<br />



The core business is the main source of income and of the possibility to repay the loan. When granting a loan, banks put the main focus on the core business, i.e. on estimating whether the regular activity of the company will be enough to cover the commitments. The credit criteria are defined by the credit policy of the bank. Common parts of those standards are: 1) general requirements (minimal qualitative criteria, additional qualitative preconditions, minimal quantitative criteria, prohibited types of transactions); 2) financial requirements in accordance with the segmentation of the customers (turnover, t/o, is the criterion for segmentation, as is the type of financing; for example, project financing has different criteria), such as: rating (which determines the ratios that follow; better-rated credit customers face lower required ratios), the ratio of short-term debt to t/o (10%-40%), collateral coverage (min 20%), the ratio of equity to long-term debt (60%-100%), and the ratio of collateral to total exposure (20%-75%). Cash-covered loans (including bank guarantees issued by a first-rank bank), overdrafts and low-risk loans are excluded from these criteria. 5<br />
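The ratio criteria above can be checked mechanically. The sketch below tests a customer against the full stated range of each ratio; in reality the concrete threshold inside each range depends on the customer's rating, so this is an illustration only.<br />

```python
def ratio_checks(st_debt: float, turnover: float, equity: float,
                 lt_debt: float, collateral: float, exposure: float) -> dict:
    """Pass/fail flags against the indicative ranges named in the text.

    The exact threshold within each range is rating-dependent; here the
    whole stated range is tested, purely as an illustration.
    """
    def within(value: float, low: float, high: float) -> bool:
        return low <= value <= high

    return {
        "st_debt/turnover (10%-40%)": within(st_debt / turnover, 0.10, 0.40),
        "equity/lt_debt (60%-100%)": within(equity / lt_debt, 0.60, 1.00),
        "collateral/exposure (20%-75%)": within(collateral / exposure, 0.20, 0.75),
    }

# A customer with short-term debt of 20 against turnover of 100, equity of
# 80 against long-term debt of 100, and collateral of 50 against exposure
# of 100 passes all three indicative checks.
checks = ratio_checks(20, 100, 80, 100, 50, 100)
```

Structuring the result as named flags rather than a single boolean mirrors how such criteria are reviewed in practice: a single breached ratio is a discussion point, not automatically a rejection.<br />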

The financial criteria should be revised in a problematic situation, i.e. at the moment of difficulties in repaying matured obligations. The most important task of collection management is to align the new situation with the customer's business; this is the approach that brings the greatest benefit to both creditor and debtor. At the moment of restructuring the loan, the initially defined financial standards are lowered, down to the break-even margin. Below this level, business restructuring does not make sense. It should also be stressed that not every business restructuring is a good restructuring. Thus, before any restructuring, it is necessary to answer the question of whether we, as a creditor, believe in this business. Before answering this question, we should conduct a detailed and serious analysis.<br />

The bank uses the collateral, as the secondary source of repayment of the loan, when the core business of the company has stopped. All instruments which the bank uses as secondary security can be classified as follows: mortgages, pledges, corporate guarantees, assignment of receivables, and other collaterals. The crucial thing is to determine the proper value at the time of approving the loan, and to reassess the value of the collateral at the moment of collecting the receivables.<br />

3.5 Control and monitoring within the collection management model<br />

PL and NPL clients are subject to control and monitoring within the collection management model. This means it should be recognizable when a PL customer has problems in repayment and which proactive measures the bank will take toward the customer in order to avoid a delay. The model also identifies which problems may appear in the NPL portfolio and which analyses should be done in order to minimize the negative effect of new problems in the NPL. 6<br />

As we have already mentioned, the model consists of two analyses: a quantitative analysis and a qualitative analysis. The quantitative analysis represents changes which have already appeared in the customer's business and may affect the regular repayment of the loan and the core business of the customer. All early warning signs can be categorized as follows: big changes in the customer's behaviour, market data, problems in daily business, and signs of fraud. The qualitative analysis represents a portfolio analysis on the basis of historical data. Historical data can be internal (the internal database) and external (market data). Internal indicators are: days of delay, rating, industry and t/o through the account within the bank. External indicators are: blockade of the account and financial data (t/o, gross profit margin, EBIT, net profit, total assets, equity, short-term loans, long-term loans, receivables, payables, inventories).<br />

On the basis of the quantitative and qualitative criteria, all credit customers of the bank are divided into three zones according to risk level: 1) the red zone (the riskiest customers of the portfolio), 2) the yellow zone (the zone of medium risk), 3) the green zone (the zone of low risk). After that, the corporate credit risk manager informs the responsible account manager which customers are potentially problematic in the future, in order to organize meetings with those customers and prepare a proper strategy, i.e. the collection of the receivables. In practice this means preparing a review application. Generally, the risk management division monitors the credit portfolio on a permanent basis, but if it assesses that it is necessary it can monitor the credit risk of a particular customer, i.e. loan.<br />
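The zoning step can be sketched as a scoring rule. The combination weights and cut-offs below are assumptions made purely for illustration; the text does not specify how the quantitative and qualitative criteria are combined.<br />

```python
def risk_zone(quantitative_score: float, qualitative_score: float) -> str:
    """Maps combined scores (each in [0, 1], higher = riskier) to the
    red/yellow/green zones described in the text.

    The 60/40 weighting and the 0.7 / 0.4 cut-offs are illustrative
    assumptions, not NBS or bank rules.
    """
    combined = 0.6 * quantitative_score + 0.4 * qualitative_score
    if combined >= 0.7:
        return "red"
    if combined >= 0.4:
        return "yellow"
    return "green"
```

Whatever the actual weights, the useful property of such a rule is that it turns two heterogeneous analyses into a single ordered signal that the account manager can act on.<br />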

5 The conclusion is based on the experience of Serbian banks in the previous period. Source: Bibliography 7.<br />

6 Erste Bank a.d. Novi Sad implemented the presented model in the second half of 2010.<br />



4. Collection management impact on the credit portfolio of the bank and its importance for credit risk management as a whole<br />

Collection management has a direct impact on the bank's profitability through the cost of reservation. The cost of reservation is the result of the bank's assessment of the part of the credit portfolio that will not be collected in the future. It means that the bank will book this reservation in the future as a loss in the financial reports. The estimation of the level of reservation, especially for the NPL part of the portfolio, is therefore crucial for the bank. In this way, the risk management division manages the total pool of reservations which it considers sufficient to cover future losses of the credit portfolio.<br />

The calculation of the individual change of value represents the total effect of collection from the core business of<br />
the customer, from selling the collateral, or from the bankruptcy of the company in the following period, discounted<br />
to net present value using the effective interest rate as the discount factor. The estimated uncollected part of the<br />
principal amount represents the individual change of value. The total expected value of the collection has two<br />
sources: the core business of the company (cash flow 1 – CF1) and collaterals or bankruptcy of the company<br />
(cash flow 2 – CF2).<br />

The core business assessment is an additional assessment of the creditworthiness of the customer. On the basis of<br />
the company's current situation and business projections, the bank decides in which period the company will be<br />
able to service its commitments toward the bank and how long this will take. This assessment is clearly more<br />
specific and more restrictive than the initial approval of the loan. The new analysis addresses the following crucial<br />
questions: 1) Is the management capable of running the company? 2) Is the company blocked by other creditors for<br />
an amount that is hardly recoverable by the debtor? 3) Does the company still have its market, and in which way<br />
has its position changed? 4) Do the financial projections reach at least the break-even point? 5) What are the<br />
structure and the value of the collateral? After analysing these questions, the bank is in a position to estimate the<br />
prospects of the business and the possibilities of credit repayment. A new restructuring of the credit commitments<br />
can be executed through official or silent restructuring. Both ways of restructuring are monitored on a monthly<br />
basis; if the customer has no delay in repayment, the restructuring can be considered successful.<br />

Collaterals or bankruptcy represent an assessment of the collection of receivables on the basis of the collaterals the<br />
bank has at its disposal (mortgages, pledges, and contracts on goods or receivables). The estimation of this cash<br />
flow is based on the bank's expectation of when, and for how much, the collateral will be sold in the future. The<br />
bank uses a conservative approach to the collection of receivables.<br />

The bank uses a model which helps in the adequate estimation of two segments of collection from collateral: the<br />
amount and the term. Further sub-criteria of the model allow a better estimation of the value of the collateral, such<br />
as quality, the level of regional development, the level of market development and collateral conditions. The<br />
estimation of the collection of receivables from bankruptcy and from core business, through the restructuring of<br />
the loan or of the whole business, is a matter of individual assessment and is very hard to quantify.<br />

In the end, the sum of the present value of expected collection from core business (CF1) and the present value of<br />
expected collection from the collateral or bankruptcy (CF2) represents the value the bank expects to collect on<br />
matured receivables. The difference between this value and the total receivable is the reservation, i.e. the individual<br />
change of value of the particular credit.<br />
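The calculation described above can be sketched in code: discount the expected cash flows CF1 (core business) and CF2 (collateral or bankruptcy) at the effective interest rate, and book the shortfall against the total receivable as the reservation. The figures below are illustrative, not taken from the paper:

```python
def present_value(cash_flows, effective_rate):
    """Present value of (year, amount) cash flows, discounted
    at the loan's effective interest rate."""
    return sum(amount / (1 + effective_rate) ** year
               for year, amount in cash_flows)

def reservation(total_receivable, cf1, cf2, effective_rate):
    """Individual change of value: the receivable minus the present
    value of expected collections from core business (CF1) and from
    collateral or bankruptcy (CF2)."""
    expected = present_value(cf1, effective_rate) + present_value(cf2, effective_rate)
    return max(total_receivable - expected, 0.0)

# Illustrative example: EUR 100 receivable, 8% effective interest rate.
cf1 = [(1, 30.0), (2, 30.0)]   # expected collections from core business
cf2 = [(2, 25.0)]              # expected collateral sale in year 2
print(round(reservation(100.0, cf1, cf2, 0.08), 2))  # 25.07
```

In this sketch the bank expects to recover about EUR 74.93 in present-value terms, so roughly EUR 25.07 of the receivable would be provisioned.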

5. Non-performing loans in Serbia<br />

The Serbian banking sector recorded credit growth in the previous period. At the end of the third quarter of 2010<br />
the credit portfolio reached EUR 14.8 billion (table 1 and figure 1).<br />

Years 2001 2005 2007 2008 2009 2010<br />

Credits 6,326 5,082 9,602 11,598 13,331 14,854<br />

Table 1: Volume of credit activity of Serbian banking sector (in mil EUR)<br />



Figure 1: Volume of credit activity of Serbian banking sector (in mil EUR)<br />

The majority of the portfolio is the corporate segment (56%), followed by the retail segment (31%), the public<br />
sector (11%), and other (2%). The most problematic issue is the growth of exposure toward the public sector in the<br />
previous period (table 2 and figure 2), because it implies increased consumption and a repayment problem for the<br />
young generation.<br />

Type of the customer 2007 2008 2009 2010<br />

Corporate 5,520 7,135 7,826 8,298<br />

Retail 3,818 4,112 4,119 4,587<br />

Public sector 175 196 1,230 1,639<br />

Others 89 155 156 330<br />

Total 9,602 11,598 13,331 14,854<br />

Table 2: Structure of Serbian banking credit portfolio in the period 2007-2010 (in mil EUR)<br />

Figure 2: Structure of Serbian banking credit portfolio in the period of 2007-2010 (in mil EUR)<br />

Non-performing loans (NPL) of the Serbian banking sector represent 17.8% of the total loan portfolio, i.e. EUR<br />
2.6 billion (table 3 and figure 3). Analysing the structure of NPL, the corporate segment accounts for 72% of NPL<br />
(in accordance with the total portfolio structure), the retail segment for 14%, and other for 13%.<br />

Indicator IX 2009 XII 2009 III 2010 VI 2010 IX 2010<br />
NPL in total credit portfolio 17.7 15.7 16.5 17.5 17.8<br />
Structure of NPL:<br />
Corporate 78 76 77 78 72<br />
Retail 15 16 15 15 14<br />
Others 7 8 8 8 13<br />
Table 3: NPL of Serbian banking sector including the structure (in %)<br />



Figure 3: NPL of Serbian banking sector including the structure (in %)<br />

Having in mind the importance of the corporate segment for the growth of an economy, we continue by analysing<br />
the structure of the NPL corporate segment (figure 4): the processing industry represents 36% of corporate NPL,<br />
trade 31%, construction 10%, real estate 9%, agriculture 8%, hotels and restaurants 6%, and education and<br />
electricity 0% each.<br />

Figure 4: Structure of NPL corporate segment of Serbian banking sector (%)<br />

At the end of 2008 the Austrian banks in Serbia established a restrictive credit policy (known as the traffic light)<br />
toward the corporate segment: 1) red light (decrease of exposure toward the defined segment, without new<br />
approvals): construction, production and sale of cars, energy, textile industry, processing industry and tourism;<br />
2) yellow light (increased watchfulness): furniture industry and exporters of fruits and vegetables; 3) green light<br />
(standard cooperation): telecommunications, IT and media, agriculture, pharmaceuticals, processing and sale of<br />
food and beverages, transport, services, municipalities and public sector. Comparing this with the current NPL<br />
structure, we can notice that this restrictive credit policy was justified.<br />
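The traffic-light policy above is essentially a static mapping from sector to credit stance, which can be sketched as a lookup table. The sector names are abbreviated from the list in the text, and the helper function is a hypothetical illustration, not a description of the banks' actual systems:

```python
# Sector policy under the 2008 "traffic light" credit policy:
#   red    = decrease exposure, no new approvals
#   yellow = increased watchfulness
#   green  = standard cooperation
TRAFFIC_LIGHT = {
    "construction": "red",
    "car production and sale": "red",
    "energy": "red",
    "textile industry": "red",
    "processing industry": "red",
    "tourism": "red",
    "furniture industry": "yellow",
    "fruit and vegetable export": "yellow",
    "telecommunications, IT and media": "green",
    "agriculture": "green",
    "pharmaceuticals": "green",
    "food and beverages": "green",
    "transport": "green",
    "services": "green",
    "municipalities and public sector": "green",
}

def may_approve_new_loan(sector):
    """New approvals are ruled out only for red-light sectors;
    sectors not in the table default to case-by-case review (None)."""
    light = TRAFFIC_LIGHT.get(sector)
    if light is None:
        return None
    return light != "red"

print(may_approve_new_loan("construction"))  # red-light sector -> False
print(may_approve_new_loan("agriculture"))   # green-light sector -> True
```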

It is also important to analyse the level of reservations in the financial reports of the banks. Unfortunately, their<br />
level during the crisis pushed profit below what was projected (table 4 and figure 5). The proposed collection<br />
management model (which Erste Bank Serbia started to use in the last quarter of 2010) should therefore have a<br />
positive effect in the following period; the results of the implemented model can be considered at the end of the<br />
third quarter of this year.<br />



2008 2009 2010<br />

Profit 26.08 36.11 24.28<br />

Net income 34.74 20.03 20.70<br />

Cost of reservation 19.56 65 28.41<br />

Table 4: Relevant financial indicators of Serbian banking sector (in bill RSD)<br />

Figure 5: Relevant financial indicators of Serbian banking sector (in mil EUR)<br />
6. Summary<br />

Collection management affects the bank's profitability and equity through the price of the product, the cost of<br />
reservation and deducted items of the equity. The economic crisis, which hit both the financial and the real sector,<br />
also stressed the importance of collection management and the need to change the collection management model.<br />
The collection of credit receivables is possible in two ways: 1) restructuring (of the credit or the business) and<br />
2) selling the collaterals. Both should be considered and implemented during the entire credit life, not only after<br />
90 days of delay in fulfilling credit obligations; an overall and proactive approach is necessary for successful<br />
collection management. The proposed collection management model aims to predict repayment problems in loans<br />
that are not presently problematic. The idea is to stop further growth of NPL and to improve the quality of the<br />
existing credit portfolio. The architecture of the model covers all credit customers – PL and NPL. The processes in<br />
the model are divided in the following way (from the perspective of the creditor, the bank): 1) processes and actions<br />
within the bank which affect the approach toward the customer, and 2) processes and actions toward the customers.<br />
These processes send the proper message to the customer regarding the strategy toward it.<br />

The work showed that there is room for further improvement of collection management. It is also evident that this<br />
process is a complex and important part of the credit risk management process in the bank. The model helps the<br />
bank to establish an adequate collection management process and an adequate relation between the bank and the<br />
debtor (company). Collection management has a big impact on the level of reservation – general and specific<br />
provisioning. Adequate assessment of collection and successful collection of receivables guarantee a reliable and<br />
precise level of reservations, which affects the profit and loss account of the bank and its equity. The assessment of<br />
specific provisioning is especially important, having in mind its impact on the total pool of reservations. Finally, it<br />
is important to emphasize that collection management is the mirror of the overall credit policy of the bank: it is not<br />
only the product of well-determined input risk parameters and credit standards, but also the mirror of the internal<br />
functioning of the different divisions involved in the credit process. Therefore, we can say that successful collection<br />
management is equivalent to the total credit business – from the moment of opening the credit file until its closing –<br />
and represents the value of the bank and its credit portfolio on the market.<br />

7. Acknowledgments<br />

This research paper was part of the project “Advancing Serbia’s Competitiveness in the Process of EU<br />
Accession”, no. 47028, in the period 2011-2015, financed by the Serbian Ministry of Science and Technological<br />
Development.<br />



8. References<br />

Altman, E. I., Hotchkiss, E. (2005). Corporate Financial Distress and Bankruptcy: Predict and Avoid Bankruptcy,<br />

Analyze and Invest in Distressed Debt, 3rd Edition, Wiley.<br />

Barjaktarovic, L. (2009). Risk management, University of Singidunum, Belgrade.<br />

Basel Committee on Banking Supervision (2001). The Internal Ratings- Based Approach – Supporting Document to<br />

the new Basel Capital Accord, (available at www.bis.org).<br />

Basel Committee on Banking Supervision (2006). International Convergence of Capital Measurement and Capital<br />
Standards: A Revised Framework – Comprehensive Version, Bank for International Settlements, Basel, (available<br />
at http://www.bis.org).<br />

Caouette, J., Altman, E., and Narayanan, P. (1998). Managing Credit Risk – The Next Great Financial Challenge,<br />

Wiley.<br />

Committee of European Banking Supervision (CEBS): http://www.c-ebs.org / [available data on March 2011]<br />

Crosbie, P. and Bohn, J. (2009). Modeling Default Risk, (Moody's KMV Document), Revised 2001, (available at<br />
www.kmv.com).<br />

Institutional Investor Journals (2010). Turnaround Management II: a Guide to Corporate Restructuring,<br />
Euromoney Institutional Investor.<br />

Gilson, S. C. (Author), Altman, E. I. (Foreword) (2010). Creating Value through Corporate Restructuring: Case<br />

Studies in Bankruptcies, Buyouts, and Breakups, Wiley.<br />

Grieser, S. and Wulfken, J. (2009). Performing and Non-Performing Loans across the World: a Practical Guide,<br />
Euromoney Institutional Investor.<br />

European Commission: http://ec.europa.eu/internal_market/bank/regcapital/index_en.htm / [available data on<br />

March 2011]<br />

Larkin, B. (2008). Restructuring and Workouts: Strategies for Maximising Value, Globe Law and Business.<br />

National Bank of Serbia: http://www.nbs.rs/ [available data on March 2011]<br />

Rating Manual (2009), Erste Group.<br />

Vasilski, A. (2010). Collection management from the borrower’s perspective during the world’s economic crisis,<br />

The Faculty of Economics, Finance and Administration, Belgrade.<br />

Work-out procedures – Holding Level (2010), Erste Group.<br />



HOW DO THE CROSS-BORDER MERGERS AND ACQUISITIONS OF THE GREEK BANKS IN THE<br />
BALKAN AREA IMPROVE THEIR PROFITABILITY, EFFICIENCY AND LIQUIDITY<br />
INDEXES?<br />

Kyriazopoulos George, Applications Professor at Finance Department, TEI of Western Macedonia<br />

Kozani GR-50100, GREECE, Email: kyriazog@teikoz.gr, kyriazopoulosg@yahoo.com<br />

Dr. Petropoulos Dimitrios, Assistant Professor of Agricultural Economy at TEI Kalamatas<br />

Kalamata, GREECE, Email: d.petro@teikal.gr<br />

Abstract. In this paper we try to explain how the cross-border mergers and acquisitions of the Greek banks may affect their<br />
profitability, liquidity and efficiency. In the first chapter of the paper we describe the Balkan macroeconomic environment and the<br />
favorable conditions and opportunities that were created, which pushed the Greek and foreign banks to expand in that particular<br />
environment. In the second chapter we present in detail the expansions of the Greek banks and of other European banks in each<br />
Balkan country separately, and we list all the mergers and acquisitions made by the Greek banks in the Balkan countries during the<br />
decade 2000-2009. At the end we discuss the positive influence of the cross-border mergers and acquisitions on deposits and loans,<br />
and therefore on the ROAA and ROAE indexes of the Greek banks. The analysis of profitability, liquidity and efficiency with<br />
financial indexes covers the period 2003-2007 and does not extend to the years 2008 and 2009 (the period of the appearance of the<br />
financial crisis). This distinction is made in order to obtain greater homogeneity and to estimate the magnitudes under conditions of<br />
economic stability and growth. The behavior and the values of the financial indexes during the financial crisis could become the<br />
object of further research in another paper.<br />

Keywords: bank, banking sector, productivity, efficiency, liquidity, cross-border mergers and acquisitions.<br />

JEL Classification Codes: G2-Financial Institutions and Services<br />

G21 – banks; Other Depository Institutions; Micro Finance, Institutions; Mortgages<br />

1. Introduction<br />

In the past twenty years, cross-border mergers and acquisitions have played an ever-increasing role in the process of<br />
bank internationalization. With the accession of the Greek economy to the Economic and Monetary Union, and<br />
mainly with the introduction of the euro in Greece, the Greek banks began to seek new sources of revenue in new<br />
markets, where costs were lower and loan interest rates higher. Since 2000 the Greek banks have adopted aggressive<br />
policies of expansion through cross-border mergers and acquisitions, mainly of local banks in the Balkan countries,<br />
increasing the magnitudes of their balance sheets, especially in consumer and mortgage loans, in an effort to<br />
increase by any means the total profitability, liquidity and efficiency of their groups.<br />

The aggressive policy of cross-border mergers and acquisitions of the Greek banks in the Balkan area was<br />
influenced substantially by the introduction of the euro into the Greek economy, which thus acquired a strong<br />
currency and, at the same time, a reduction of foreign exchange risk. In effect, only the management of credit risk<br />
was left to the banks, since the political risk of the Balkan countries had been eliminated with the restoration of<br />
democracy in these countries. The banks offer differentiated products to their clients, which can be adapted to the<br />
particular living conditions, preferences, risk characteristics and needs of the population and the businesses of the<br />
Balkan countries. The wave of cross-border mergers and acquisitions in the Balkans caused some fears, mainly<br />
regarding the stability of the system, the supply of an adequate amount of capital through the granting of loans to<br />
small and medium-sized firms and to consumers, and the loan pricing practices toward them. However, «Basel II»<br />
already provides complete means of control, while the granting and pricing of loans appears not to be affected, in<br />
the end, by the buyouts and mergers.<br />

The trend toward cross-border mergers and acquisitions is attributed in banking theory to different reasons, such as<br />
cost optimization, increased market power, the capability of greater risk dispersion, and reasons that have to do with<br />
corporate governance and the “creation of an empire” by the chief executive officers of each organization.<br />



2. The Macroeconomic Environment in the Balkan Countries and the opportunities for the Greek<br />

Banks<br />

After the fall of the communist regimes in 1989, much of the banking sector in the SEE countries was still<br />
underdeveloped and centrally planned, while money had only a limited role as a medium of exchange. In general<br />
the banking sector in the SEE countries has developed significantly in the past 10 years (1998-2007). However,<br />
many challenges still lie ahead for the banking sector in the SEE countries, and it remains less developed relative<br />
to the Central Eastern European transition countries. 1<br />

Turkey has had an EBRD index only since 2008. Table 1 exhibits the improvement of the South Eastern European<br />
(SEE) countries' banking sector over the period 1998-2007, according to the EBRD Index of Banking System<br />
Reform:<br />

Table 1: SEE Countries' Banking System Reform Index 1998-2007<br />

Country Year EBRD Index of Banking System<br />

Reform from 1 to 4<br />

Albania 1998-2003 (2004-2007) 2,0-2,3 (2,67)<br />

Bosnia/Herzegovina 1998-2003 (2004-2007) 2,3 (2,67)<br />

Bulgaria 1998-2003 (2004-2007) 2,7-3,3 (3,33-3,67)<br />

Serbia/Montenegro 1998-2003 (2004-2007) 1,0-2,3 (2,3-2,67)<br />

Fyrom 1998-2003 (2004-2007) 3,0 (2,67)<br />

Romania 1998-2003 (2004-2007) 2,3-2,7 (3,0-3,33)<br />

Source: European Bank for Reconstruction and Development (EBRD)<br />

According to the EBRD index of the banking system reform, all SEE countries were classified around 2,3 in the period 1998-2003.<br />
Figure: EBRD Index of Banking System (scale 1-4) by SEE country, years 2003 and 2007<br />

As the diagram shows, all countries have improved their EBRD index except FYROM, which shows a small<br />
decrease of about 0,3 from 2003 to 2007.<br />

According to the EBRD index of the banking system reform, all SEE countries are classified around 3 in the period<br />
2004-2007. This classification means: “there has been progress in establishment of bank solvency and of a<br />
framework for prudential supervision and regulation, while there is significant lending to private enterprises and<br />
significant presence of private banks”. We observe that all the countries improved in the period 2004-2007 relative<br />
to the period 1998-2003.<br />

Macroeconomic progress in the Balkan region is strong and economic activity increases steadily (2006: 4,5%-5%<br />
on average), mainly due to strong domestic demand. In most of the Balkan countries increased household<br />
borrowing has accelerated the rate at which imports of consumer goods are rising, thus significantly worsening the<br />
trade balance. Something similar started in Greece toward the end of the 1990s and lasted until the middle of 2007,<br />
a fact that contributed to the present financial crisis. 2<br />

The financial sector in the Balkans is generally healthy. The banks operate in a generally favorable entrepreneurial<br />
environment, with total macroeconomic activity on the rise and under conditions and prospects of increasing<br />
profitability. Credit expansion, especially toward households, is strong in all the countries of this<br />

1 Stubos G., Tsikripis I., Banking Sector Developments in South-eastern Europe May 2004<br />

2 Kyriazopoulos G., Petropoulos D., “Does the cross border merger and acquisitions of the Greek Bank in the Balkan Area affect on the course of<br />

profitability efficiency and liquidity index of them?” May 2011<br />



region, with rates of increase higher than the European average. The quality of the loans made by the banks is<br />
satisfactory, with acceptable ratios of non-performing to total loans. However, future deterioration of these ratios is<br />
conceivable, as the loan burden assumed by households is increasing at a fast rate. The percentage of loans in<br />
foreign currency (mainly US dollars and euros) in total loans is quite high and in some instances exceeds the usual<br />
limits of safety (e.g. 73% in Albania in 2005).<br />

2.1 The Expansion of Greek Banks in the Balkans 3, 4<br />

The expansion of the large Greek banking groups in the Balkans began at the beginning of the 1990s and intensified<br />
in the period 1998-2007, as a result of the gradual convergence of the Greek banking system and the Greek<br />
economy to the standards of the more developed European countries. The Greek banking system, showing signs of<br />
maturity, needed to expand beyond its borders; the Balkans, whose cultural and geographical proximity created<br />
conditions of competitive advantage, thus defined the new field of Greek banking activity. The gradual adaptation<br />
of the Balkan countries to free market conditions, the continuation of structural changes, the massive privatization<br />
programs and the prospects of economic development were essential factors in attracting the Greek banks, and the<br />
banks of other European countries, to the region. Initially the expansion of the Greek banks in the Balkans aimed<br />
mainly at serving the Greek businesses that became active in the region seeking new opportunities. The Greek<br />
banks, being familiar with the activity and the credit risk of the Greek enterprises, accompanied them in their<br />
expansion in the Balkans. Later on, the Greek banks gradually enlarged their activities in the Balkan area, taking<br />
advantage of the rising retail banking market and capitalizing on investment opportunities in sectors such as the<br />
purchase of land and the development of commercial and residential zones.<br />

In the period 2004-2007 the presence of the large Greek banks in the Balkans no longer constituted circumstantial<br />
financial exploitation, but a long-term plan of strategic investment, expressed either through autonomous<br />
development or through buyouts of local banks. In 2005 the five largest Greek banking groups had created a<br />
network of 958 branches in total, employed more than 15.000 people, had acquired a 17% share of the lending<br />
market, and their profits exceeded €138 million. In 2007, 20% of their total profits came from this region. In some<br />
of the Balkan countries the Greek banks at present hold a leading position. Through their expansion in the Balkans,<br />
the Greek banks aim to improve their profitability, as retail banking in Greece gradually approaches the saturation<br />
levels of most countries in the Eurozone. Indicatively, in 2005 lending to households and businesses as a percentage<br />
of the Gross National Product (GNP) in the Balkan countries ranged from 14,5% (Albania) to 44% (Bulgaria), as<br />
opposed to 76% in Greece and 104% in the Eurozone.<br />

3. Cross border Mergers and Acquisitions of Greek Banks in the Balkan Countries.<br />

In the period 2003-2007 the Balkans offered significant margins of business development to the banks of European<br />
countries, because of the high rate of credit expansion and the still low level of development of the financial<br />
markets. However, as the European banks continue to upgrade the systems and products they offer, the Greek banks<br />
will have to adapt their strategies to an environment of increasing competition. The expansion of the Greek banks to<br />
the developing neighboring markets of the Balkans and the Mediterranean remained the central axis of their strategy<br />
in the period 2003-2007, just before the crisis began. At present, however, the valuations of the financial institutions<br />
in the region are rising to such a degree that in some cases the total buyout price is prohibitive for the Greek banks.<br />

3.1. Greek Banking activity in the Balkan Area 5<br />

In the period 2003-2007 the Greek banking activity in the Balkan region refers to the countries of Serbia, Fyrom,<br />
Romania, Bulgaria and Turkey. In these countries the Greek banks that expanded are<br />

3 The source for numbers and percentages is IMF and ECB<br />

4 Kyriazopoulos G., Petropoulos D., “Does the cross border merger and acquisitions of the Greek Bank in the Balkan Area affect on the course of<br />

profitability efficiency and liquidity index of them?” May 2011<br />

5 Kyriazopoulos G., Petropoulos D “Cross Border Mergers and Acquisitions in the Balkan Countries after the introduction of Euro in Greece”<br />

Esdo Kavala June 2010<br />



among the largest banks of Greece, regardless of whether their origin and the majority of their shareholders are<br />
from outside Greece. These banks are the National Bank of Greece, Piraeus Bank, Alpha Bank, Efg Eurobank, ATE<br />
Bank, Emporiki Bank, and Egnatia Bank, which after a big merger in Greece is now called Marfin Egnatia Bank.<br />
The country with the greatest penetration in the Albanian banking market is Greece: the five big Greek banks have<br />
a strong presence and concentrate 35,7% of loans granted to customers; the USA follows with 16,7%, through the<br />
American Bank of Albania, and Austria with 13,9%, through the Raiffeisen Bank.<br />

The year 2007 was a milestone for Bulgaria, since at the beginning of that year it became a member of the European<br />
Union. In Bulgaria, banking activity on the part of Greece and other foreign countries did not present considerable<br />
interest. The total amount of banking mergers and acquisitions in Bulgaria from 2004 to 2006 was $265,97 millions<br />
for the Greek banks. More specifically, the Greek banks that played a leading part in the cross-border mergers and<br />
acquisitions were EfgEurobank and Piraeus Bank, which bought out DZI Bank and Eurobank (Bulgarian)<br />
respectively. The percentages of participation and the buyout amounts are shown in table 2 below.<br />

Table 2 The Greek banks that merged and bought out banks of Bulgaria<br />

Greek Bank Buyer Bulgarian Bank Target Participation (%) Buyout Amount $ millions<br />

EfgEurobank DZI Bank 74,26% 200,20<br />

Piraeus Eurobank 99,70% 65,77<br />

Total 265,97<br />

Source: Mergerstat. Intellinet. Bloomberg. Athens Stock Exchange<br />

Moreover, the merger of the operations of Piraeus Bank and its subsidiary Piraeus Eurobank AD during the first<br />
half of 2006 resulted in the creation of a larger bank, holding the 8th position in the classification of banks and<br />
outdistancing the Economic Investment Bank, SG-Express Bank and DZI Bank, which held the 8th, 9th and 10th<br />
positions respectively in Bulgaria in 2005.<br />

The National Bank of Greece holds the predominant position in the banking market of FYROM, having bought<br />
68,4% of the second largest bank of the country (Stopanska Banka Skopje) in April 2000 for €60 million. Through<br />
Stopanska Banka Skopje, the National Bank of Greece has a 23,8% market share based on assets and a 27% market<br />
share based on lending. Alpha Bank entered the banking market of FYROM in 1999 after buying out Kreditna<br />
Banka, which was later renamed Alpha Bank Skopje. In 2005, Alpha Bank Skopje had a 2,6% market share in<br />
lending. EFG-Eurobank-Ergasias, Emporiki Bank and Piraeus Bank had no presence in FYROM until 2007.<br />

Romania became a full member of the European Union along with Bulgaria in January 2007. The banking sector of<br />
Romania consisted of 39 banks, of which 30 were foreign-owned and 1 state-owned. The concentration of the<br />
banking sector is not very high, as the 5 largest banks control only 60% of the sector's total capital. In Romania,<br />
banking activity on the part of Greece did not at first exhibit much interest compared with that shown by other<br />
countries. The total amount of mergers and acquisitions in Romania in the period 2004-2006 was $81,07 millions<br />
for the Greek banks. In 2005, the Greek banks concentrated in total 13,6% of loans granted to customers. The<br />
percentages of participation and the acquisition amounts of the Greek banks are shown in table 3 below.<br />

Table 3: The Greek banks that were merged and bought out banks of Romania<br />

Greek Bank Buyer Romania Bank Target Participation (%) Amount of Buyout ($ millions)<br />
ΑΤΕ Mindbank 57,13% 41,07<br />
NBG Banka Romaneasca 81,65% 40,00<br />
Total 81,07<br />

Source: Mergerstat. Intellinet. Bloomberg. Athens Stock Exchange<br />

In Serbia, banking activity on the part of Greece and of other countries exhibited considerable interest. The total<br />
amount of mergers and acquisitions in Serbia during the period 2004-2006 reached $973,68 millions for the Greek<br />
banks. More specifically, the Greek banks that played a leading part in the mergers and acquisitions in Serbia are<br />
Alpha Bank, EfgEurobank, ATE Bank, the National Bank of Greece and Piraeus Bank, which bought out the banks<br />
Jubanka, National Savings Bank, AIK Banka, Vojvodjanska Banka<br />

195


and Atlas Banka respectively each one. The percentage of participation and the amount of buyout are shown in the<br />

following table 4.<br />

Table 4: The Greek banks that merged and acquired banks of Serbia<br />

Greek Bank Buyer Serbian Bank Target Participation (%) Buyout Amount ($ Millions)<br />

Alpha Bank Jubanka 100,00% 206,69<br />

EfgEurobank National Savings Bank 90,20% 93,28<br />

ΑΤΕ Bank ΑΙΚ Banka 20,00% 101,78<br />

NBG Vojvodjanska Banka 100,00% 489,57<br />

Piraeus Bank Atlas Banka 88,23% 34,45<br />

MarfinLaiki Bank Centrobanka 76,00% 47,91<br />

Total 973,68<br />

Source: Mergerstat, Intellinet, Bloomberg, Athens Stock Exchange<br />

The countries with the greatest penetration in the Serbian market are Italy (2005: 18% share of assets), Austria and<br />

Greece (2005: 17% share of assets each). In 2005, the large Greek banks exhibited limited capability to penetrate<br />

the Serbian market. At present, however, the total presence of the large Greek banks has been strengthened after<br />

the buyout of the country's 6th largest bank by the NBG. In 2005, the 5 large Greek banks concentrated a 12,86%<br />

market share based on the granting of loans, up from 8,66% before the buyout of Vojvodjanska Banka.<br />

In Turkey, Greek banking activity did not exhibit as much interest as it did in other countries. The total amount of<br />

Greek banking mergers and buyouts in Turkey during the period 2004-2006 reached $3.198,30 million, of which<br />

$2.770,00 million concerned the buyout of a 46,00% share in Finansbank by the National Bank of Greece. The<br />

participation percentages and buyout amounts are shown in Table 5 below.<br />

Table 5: The Greek banks that merged with and bought out banks of Turkey<br />

Greek Bank Buyer Turkish Bank Target Participation (%) Amount of Buyout ($ Millions)<br />

NBG Finansbank 46,00% 2.770,00<br />

Alpha Alternatifbank 47,00% 246,30<br />

EfgEurobank Tekfenbank 70,00% 182,00<br />

Total 3.198,30<br />

Source: Mergerstat, Intellinet, Bloomberg, Athens Stock Exchange<br />

Indicative of the importance the Greek banks attribute to their activities abroad is the magnitude of their investment<br />

in the region of Southeastern Europe. To date they have invested more than €6 billion and they operate 1.900<br />

premises (representing 1/3 of their total number of premises). 6<br />

Thus the «domestic banks attain by far the first position with respect to the number of deals that took place in the<br />

region» 7 : of a total of 40 cross-border deals made in the last 4 years in Turkey, Bulgaria, Serbia, Albania, Romania<br />

and FYROM, 13 were made by Greek financial institutions, mainly by the NBG, EFG Eurobank, Piraeus Bank and<br />

Alpha Bank. Already, in a four-year period (2002-2006) of dynamic development, the Greek banks have won<br />

significant shares in the markets of the countries they have entered, while their target is for these countries «to<br />

produce» 20-30% of their revenues and about 10-15% of their profitability in the next 2-3 years. However, it should<br />

be noted that the Greek banks were very generous in the valuation of the target banks they chose to buy out in the<br />

past, not so much with respect to the absolute amounts in euros that they paid as with respect to the valuation<br />

multiples. This was so either because they were late to decide, which resulted in higher valuations, or because the<br />

number of acquisition targets had decreased, which<br />

6 Bank of Greece and Union of Greek Banks «The International Credit System: Challenges and Perspectives», Scientific Marketing, volume of September 2007<br />

7 «With advantage the Greek banks in the Balkans», Georgas V., Eleftherotypia, April 6, 2009<br />



resulted in the deals being completed at higher prices. One of the most expensive buyouts that took place, as<br />

mentioned in the article, was that of the Bulgarian bank DZI in September 2006. Eurobank paid $200,2 million for<br />

74% of the bank, a valuation of 6,8 times its book value, when other buyouts in the country by foreign banks were<br />

completed at 2,5 times equity value. Equally expensive was the price paid by EFG Eurobank for the National<br />

Savings Bank of Serbia, 6 times its book value, while among the most expensive were the two large buyouts made<br />

by the National Bank of Greece: the first, of Vojvodjanska Banka, for $490 million, an amount equivalent to 5,3<br />

times its book value and 150 times its pre-tax earnings, and the second, of the Turkish Finansbank, for $2,77<br />

billion, an amount 4 times its book value.<br />
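The multiples quoted above follow directly from the reported figures. As a rough illustration (a sketch only; the implied total book value is our back-calculation, not a figure from the article), the price-to-book multiple of a partial buyout is the price paid divided by the acquired share of the target's book value:<br />

```python
def price_to_book(price_paid, stake, book_value):
    """Price-to-book multiple of a partial acquisition:
    price paid over the acquired share of the target's book value."""
    return price_paid / (stake * book_value)

# DZI Bank: Eurobank paid $200,2 million for a 74% stake at 6,8x book value,
# implying a total book value of roughly $39,8 million.
implied_book = 200.2 / (0.74 * 6.8)
print(round(implied_book, 1))                               # 39.8
print(round(price_to_book(200.2, 0.74, implied_book), 1))   # 6.8
```

The same calculation applies to the other deals quoted in the text, given the corresponding stake and multiple.<br />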

The Greek banks completed 16 buyouts of banking institutions in the Balkans region during the period 2004-2006,<br />

as shown in Table 6 below.<br />

Table 6: Mergers and Acquisitions between 2002-2006 in the SEE<br />

Greek Bank Buying out Foreign Bank Country Buyout Percentage (%)<br />

NBG Vojvodjanska Banka Serbia 100,00<br />

Finansbank Turkey 46,00<br />

Banca Romaneasca Romania 98,88<br />

United Bulgarian Bank Bulgaria 99,91<br />

EfgEurobank Dominet Bank Poland 100,00<br />

DZI Bank and Postbanka Bulgaria 74,26 and 100<br />

Universal Bank Ukraine 99,34<br />

Tekfenbank Turkey 70,00<br />

National Savings Bank Serbia 100,00<br />

Bancpost Romania 78,00<br />

Alpha Bank Alternatifbank Turkey 47,00<br />

Jubanka Serbia 99,99<br />

Piraeus Bank Atlas Banka Serbia 100,00<br />

Eurobank Bulgaria 99,85<br />

Egyptian Commercial Bank Egypt 93,47<br />

ΑΤΕ Bank AIK Banka Serbia 20,00<br />

Mindbank Romania 74,00<br />

Source: Deloitte<br />

Table 7: Results of the indexes ROAA and ROAE from the cross-border mergers and acquisitions in 2006<br />

BANK   Equity / Assets (%)   Loans / Deposits (%)   ROAA after tax (%)   ROAE after tax (%)   (all at 31/12/2006)<br />

1 National 8,7 77,6 1,6 22,3<br />

2 Eurobank 5,3 140,2 1,4 24,7<br />

3 Alpha 4,7 132,8 1,4 28,3<br />

4 Piraeus 5,0 128,2 1,7 32,4<br />

5 Emporiki 5,1 107,2 0,6 11,3<br />

6 ATE 5,7 67,3 0,7 12,8<br />

7 Postal Savings 6,5 43,0 1,0 14,2<br />

8 Marfin 12,7 87,4 3,0 16,3<br />

9 Geniki 4,4 112,5 -1,3 -25,5<br />

10 Egnatia 7,2 96,2 0,5 6,9<br />

11 Attica 5,3 91,3 0,1 1,5<br />

12 Aspis 5,5 101,5 0,5 8,8<br />

13 Proton 24,6 83,8 3,9 12,3<br />

14 Bank of Cyprus 6,3 68,9 1,3 21,2<br />

15 Laiki Bank 5,5 69,6 1,1 20,6<br />

Source: Calculated from bank financial statements<br />

In the diagram below we can see the course of the ROAA and ROAE of the Greek banks in the year 2006. Among<br />

the top 5 Greek banks, we observe that Piraeus Bank has the highest ROAE index, so it has benefited the most from<br />

its cross-border mergers and acquisitions in the Balkans and increased the wealth of its shareholders more than the<br />

others.<br />



[Figure: The course of Greek bank ROAA (%) and ROAE (%) in the year 2006, by bank.]<br />

The data needed to draw the above diagram are taken from Table 7.<br />
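The observation that Piraeus Bank shows the highest ROAE among the top 5 banks can be verified directly against the Table 7 figures; a minimal sketch (values copied from Table 7):<br />

```python
# ROAE (%) after tax at 31/12/2006 for the top 5 Greek banks (from Table 7)
roae_2006 = {
    "National": 22.3,
    "Eurobank": 24.7,
    "Alpha": 28.3,
    "Piraeus": 32.4,
    "Emporiki": 11.3,
}

# The bank with the highest ROAE among the top 5
best = max(roae_2006, key=roae_2006.get)
print(best, roae_2006[best])  # Piraeus 32.4
```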

4. The Profitability, efficiency and liquidity of the Greek Banks [8] in the years 2003-2007<br />


The profitability of the SEE countries' banking sector has improved significantly over the last six years. This was a<br />

result of the general reform of the banking system and the high intermediation spread in these countries. In the<br />

future, the financial institutions in the SEE countries should find different sources of profitability, since the<br />

intermediation spread is expected to fall as the economies stabilize, interest rates fall and competition among banks<br />

increases. The financial institutions should seek these new sources of profitability in retail banking and asset<br />

management for Balkan clients, while trying to increase their market share. On the other hand, they should control<br />

their operating expenses and expand their activities quite carefully, in order to minimize their losses from bad<br />

loans. Table 8 below presents two indexes of the SEE countries' bank profitability, the Return on Assets and the<br />

Return on Equity. [9]<br />

Table 8: SEE Countries' Bank Profitability<br />

Country Year ROA ROE<br />

Albania 1998 and 2002 -1,8% and 1,2% -82,3% and 19,1%<br />

Bosnia/Herzegovina 1998 and 2002 n/a and n/a n/a and n/a<br />

Bulgaria 1998 and 2002 1,7% and 2,1% 15,8% and 16,2%<br />

Serbia/Montenegro 1998 and 2002 n/a and n/a n/a and n/a<br />

Fyrom 1998 and 2002 2,0% and 1,5% 8,2% and 6,9%<br />

Romania 1998 and 2002 0,1% and 2,6% 1,0% and 18,3%<br />

Greece 2003 0,9% 12,8%<br />

EU Large Banks 2003 0,4% 11,4%<br />

Source: National Central Banks, Bank of Greece, ECB<br />

The analysis of the financial statements of a business includes, besides the selection of the appropriate index, the<br />

comparison, without which the resulting conclusions do not have any meaning and most probably do not lead to<br />

the correct interpretation. The comparison makes sense when it is done in relation to time and in relation to similar<br />

businesses or the sector. This double comparison gives the capability of a more correct interpretation of the indexes<br />

and consequently of the business condition (Papoulias, 2000). In order to achieve this we will use the following<br />

indexes [10] :<br />

4.1. Profitability – Efficiency Indexes in Greek Banking Sector<br />

a. Return on Assets (ROA): ROA = Net Profits / Total Assets<br />

This index reflects the administration's capability to use efficiently the financial resources (assets) that it has at its<br />

disposal, in order to create profits.<br />

[8] Petropoulos D., Kyriazopoulos G., “Profitability, efficiency and liquidity of the co-operative banks in Greece” ICOAE Athens August 2010<br />

[9] Stubos G., Tsikripis I., Banking Sector Developments in South-eastern Europe May 2004<br />

[10] The above indexes are recorded in the bibliography as the most appropriate indexes for measuring economic magnitudes and characteristics<br />

(profitability, return, liquidity) of the banks.<br />



b. Return on Equity (ROE): ROE = Net Profits / Total Equity [11]<br />

This index reflects the efficiency with which the bank uses the capital of its owners, as it shows the size of profits<br />

that were created by the capital that was invested by the shareholders (owners) of the bank- enterprise.<br />

4.2. Liquidity Indexes in Greek Banking Sector<br />

a. Loans / Deposits: This index shows the banks' needs in relation to loans and deposits.<br />

b. Total Assets / Total Loans: This index shows the proportion of loans that the bank retains. A high value means<br />

low efficiency and low risk.<br />
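The four indexes defined in sections 4.1 and 4.2 can be sketched as simple ratio functions. The balance-sheet figures in the example below are hypothetical, chosen only to show the mechanics:<br />

```python
def roa(net_profits, total_assets):
    """Return on Assets: net profits over total assets."""
    return net_profits / total_assets

def roe(net_profits, total_equity):
    """Return on Equity: net profits over shareholders' total equity."""
    return net_profits / total_equity

def loans_to_deposits(loans, deposits):
    """Liquidity index (a): loans over deposits."""
    return loans / deposits

def assets_to_loans(total_assets, total_loans):
    """Liquidity index (b): total assets over total loans.
    A high value means low efficiency and low risk."""
    return total_assets / total_loans

# Hypothetical bank figures (in EUR millions)
assets, equity, loans, deposits, profits = 10_000, 600, 6_500, 8_000, 90

print(f"ROA = {roa(profits, assets):.2%}")                 # 0.90%
print(f"ROE = {roe(profits, equity):.2%}")                 # 15.00%
print(f"L/D = {loans_to_deposits(loans, deposits):.2%}")   # 81.25%
print(f"A/L = {assets_to_loans(assets, loans):.2%}")       # 153.85%
```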

The comparative analysis of the above indexes for the banking sector as a whole in Greece in essence consists of<br />

the comparative analysis with the commercial banks that operate in Greece. The participation of the co-operative<br />

banks in the totals of the economic magnitudes of the banking sector in Greece is very limited. Consequently, the<br />

corresponding indexes referring to the whole banking sector are essentially the indexes of the commercial banks.<br />

4.3. Results in Greek Banking Sector<br />

Here are the results of the Greek Banking System for profitability and efficiency.<br />

4.3.1. Profitability – Efficiency in Greek Banking Sector<br />

a. ROA = Net Profits / Total Assets<br />

Analyzing the specific index we could:<br />

• Compare the efficiency among the co-operative banks.<br />

• Observe the efficiency through time.<br />

• Compare the efficiency of the co-operative banks with the efficiency of the banking sector as a whole.<br />

• Investigate the reasons for the changes through time.<br />

From the analysis of Table 9 we find that efficiency increased by 63,77%. Moreover, the average of the 5-year<br />

period (2003-2007) is 0,90.<br />

Table 9: Return on Total Assets (ROA) – Index<br />

Bank   2003   2004   2005   2006   2007   Percentage change 2007-2003   Average of five-year period<br />

Total of banks 0,69 0,65 0,96 1,07 1,13 63,77 0,90<br />

Source: Union of co-operative banks of Greece (ESTE) / Bank of Greece / self processing<br />
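The percentage-change and five-year-average columns of Table 9 can be reproduced from the yearly values; a minimal sketch using the ROA series of the table:<br />

```python
# ROA of the total of banks, 2003-2007 (from Table 9)
roa_total_banks = {2003: 0.69, 2004: 0.65, 2005: 0.96, 2006: 1.07, 2007: 1.13}

values = list(roa_total_banks.values())
pct_change_2007_2003 = (values[-1] - values[0]) / values[0] * 100
five_year_average = sum(values) / len(values)

print(round(pct_change_2007_2003, 2))  # 63.77
print(round(five_year_average, 2))     # 0.9
```

The same two formulas reproduce the corresponding columns of Tables 10, 11 and 12.<br />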

[Figure: ROA of the total banking sector, 2003-2007, and the five-year average.]<br />

b. ROE = Net Profits / Total Equity<br />

Analyzing this index of the efficiency of total shareholder equity, we can find out whether the purpose of achieving<br />

a satisfactory result has been attained. From the index in Table 10 we infer that the banks achieved their purpose,<br />

because we can see that the index has an upward course. Finally, we observe that the change of the specific index<br />

for the total of the banking sector is an increase of 74,77%. As reasons for the above behavior of the efficiency of<br />

total shareholder equity index the following could be mentioned:<br />

• The increase of the total banks' premises was followed by an improvement, except in the year 2004, in the way<br />

they are managed.<br />

• Good management of the banks' total capital.<br />

• High productivity of the banks overall.<br />

[11] Vasiliou D., Financial Management Patra 1999.<br />



Table 10: Return on Total Equity (ROE) – Index<br />

Bank   2003   2004   2005   2006   2007   Percentage change 2007-2003   Average of five-year period<br />

Total of banks 8,52 7,70 11,80 13,47 14,89 74,77 11,28<br />

Source: Union of co-operative banks of Greece (ESTE) / Bank of Greece / self processing<br />

[Figure: ROE of the total banking sector, 2003-2007, and the five-year average.]<br />

4.3.2. Liquidity Results in Greek Banking Sector<br />

a. Loans / Deposits<br />

Analyzing this liquidity index, we can find out the capability of a bank to fulfill its obligations. From Table 11 we<br />

find that the deposits more than cover the loans for the total of the banking sector. We also observe that the average<br />

for the period under investigation for the total of the banks is 80,55%.<br />

Table 11: Liquidity (Loans / Deposits) – Index<br />

Bank   2003   2004   2005   2006   2007   Percentage change 2007-2003   Average of five-year period<br />

Total of banks 74,16 77,42 79,77 84,88 86,54 16,69 80,55<br />

Source: Union of co-operative banks of Greece (ESTE) / Bank of Greece / self processing<br />

[Figure: Liquidity (Loans / Deposits) of the total banking sector, 2003-2007, and the five-year average.]<br />

b. Total Assets / Total Loans<br />

Analyzing Table 12 we find that this index shows large differentiations and variations. We also observe that the<br />

percentage change of the index for the total of the banks is negative. This negative result affects the average of the<br />

5-year period.<br />

Table 12: Liquidity (Total Assets / Total Loans)*100 – Index<br />

Bank   2003   2004   2005   2006   2007   Percentage change 2007-2003   Average of five-year period<br />

Total banks 205,27 186,22 187,83 175,87 178,20 -13,19 186,68<br />

Source: Union of co-operative banks of Greece (ESTE) / Bank of Greece / self processing<br />



[Figure: Liquidity (Total Assets / Total Loans) of the total banking sector, 2003-2007, and the five-year average.]<br />

5. Conclusions<br />

The Greek banks, recognizing the opportunities for increasing their earnings that arise in the markets of the Balkan<br />

countries after the entrance of some of them into the European Union, as well as the gradual accession of the rest,<br />

took care to expand their magnitudes through bank buyouts in these countries. From the tables and the data of the<br />

present paper it is understood that the Greek banks have substantially benefited from the cross-border mergers and<br />

acquisitions, since they have significantly increased the amount of their deposits as well as their loan portfolios and<br />

have an adequately high ROAE index. Of course it remains unknown whether the wave of cross-border mergers<br />

and acquisitions in the banking sector of the Balkan countries will continue at the same rate, given the present<br />

economic crisis.<br />

We believe that a few years from today the banking map of the Balkans will include a much smaller number of<br />

banks, though of significantly larger size. The Greek banks will need to expand quickly and to acquire the<br />

appropriate size, so as to play a leading part in the next stage of mergers and buyouts in the region after the crisis.<br />

One basic result that comes out of this paper is that the banks, in spite of the relatively short period of their<br />

business activity in the Balkans, have achieved significant magnitudes and have managed to cover basic needs of<br />

the local societies.<br />

The estimated profitability and efficiency of the banks reach satisfactory levels. However, the course of the<br />

liquidity indexes of the banks is going down. The challenges of the Balkans have been recognized by the Greek<br />

banks and they are expanding in the Balkan countries. The growing economic strength of the Balkan area is<br />

considered likely to allow the Greek banks to improve their performance and expand their activities.<br />

6. References<br />

Brakman S., Garretsen H., Marrewijk van C., “Cross-border Mergers and Acquisitions on revealed comparative advantage and<br />

merger waves” January 2008 TI 2008-013/2 Tinbergen Institute Discussion Paper<br />

Kyriazopoulos G., Zissopoulos D., Sariannidis N., «The advantages and the disadvantages of mergers and acquisitions in the<br />

international and Greek economy» ESDO Florina Sept. 2009.<br />

Kyriazopoulos G., Petropoulos D., “Cross Border Mergers and Acquisitions in the Balkan Countries after the introduction of<br />

Euro in Greece” Esdo Kavala June 2010<br />

Kyriazopoulos G., Petropoulos D., “What are the advantages and disadvantages from the banks mergers and acquisitions? Does<br />

Altman’s z-score model for bankruptcy motivate banks for mergers and acquisitions?<br />

Evidence from the Greek banking system”. ICOAE Athens August 2010<br />

Kyriazopoulos G., Petropoulos D., “Does the cross border mergers and acquisitions of the Greek banks in the Balkan area affect<br />

on the course of profitability efficiency and liquidity indexes of them?” EBEEC 2011 Pitesti Romania<br />

Pasiouras F., Tanna S., Zopounidis C., (2007), “The identification of acquisition targets in the EU banking industry:<br />

An application of multicriteria approaches”, International Review of Financial Analysis, Volume 16, Issue 3, Pages 262-281.<br />

Petropoulos D., Kyriazopoulos G., “Profitability, efficiency and liquidity of the co-operative banks in Greece” ICOAE Athens<br />

August 2010<br />

Rhoades Stephen A., 1986, "Bank Operating Performance of Acquired Firms in Banking Before and After Acquisition",<br />

Washington Federal Reserve System, Staff Studies, No 149, (May).<br />

Stubos G., Tsikripis I., Banking Sector Developments in South-eastern Europe May 2004<br />

Topaloglou L., University of Thessaly Department of Planning and Regional Development Interaction, Perceptions and Policies<br />

in the Boundaries of European Union: The Case of the Northern Greek Cross Border Zone Economic Alternatives, issue 1,<br />

2008.<br />

Vasiliou D., Financial Management Patra 1999<br />



PERFORMANCE OF ISLAMIC BANKS ACROSS THE WORLD: AN EMPIRICAL ANALYSIS OVER<br />

THE PERIOD 2001-2008<br />

Sandrine Kablan, ERUDITE, University of Paris Est Créteil, France.<br />

Ouidad Yousfi, MRM-CR2M, University of Montpellier 2, France.<br />

Email: ouidad.yousfi@univ-montp2.fr<br />

Abstract: Our study aims at analyzing Islamic bank efficiency over the period 2001-2008. We use stochastic frontier analysis (SFA) to<br />

estimate the efficient frontier. Our results show that the Islamic banks in our sample were efficient at 92%. The level of efficiency can,<br />

however, vary according to the region where they operate. Asia displays the highest score, 96%. Indeed, countries like Malaysia that<br />

made reforms in order to allow these banks to better cope with the existing financial system display the highest scores. On the<br />

contrary, countries with an Islamic banking system do not necessarily display efficiency scores superior to the average. The subprime<br />

crisis seems to have impacted those banks indirectly. Market power and profitability have a positive impact on Islamic bank<br />

efficiency, while it is the contrary for size. The latter implies that these banks do not benefit from economies of scale, maybe because<br />

of the specificity of Islamic financial products.<br />

Keywords: Islamic Finance, Islamic Banks, performance, efficiency, stochastic frontier analysis.<br />

JEL classification: G21, G24, G15.<br />

1. Introduction<br />

The market share of Islamic banks has increased by 15% per annum over the last decade (Moody's, 2008). The<br />

emergence and boom of Islamic finance have led several economists to write on this topic. Many studies discussed<br />

in depth the rationale behind the prohibition of interest (Chapra, 2000), but also the policy implications of<br />

eliminating interest payments (see among others Khan, 1986, Khan and Mirakhor, 1987 and Dar, 2003).<br />

However, most of the existing literature on Islamic banking consists of studies on the measurement of performance<br />

in Islamic banks: they examine the relationship between profitability and banking characteristics.<br />

A first group of studies is interested in the performance of Islamic banks in a specific country, through<br />

financial ratios. Those ratios capture (a) profitability, (b) liquidity, (c) risk and solvency and (d) efficiency. For<br />

instance, Saleh and Rami (2006) focus on the performance of the first and the second Islamic banks in Jordan:<br />

Jordan Islamic Bank for Finance and Investment (JIBFI) and Islamic International Arab Bank (IIAB). They notice<br />

that they play a major role in financing ventures in Jordan, particularly short-term investment, and both banks have<br />

increased their activities and expanded their investment, but the JIBFI still has higher profitability. They conclude<br />

that Islamic banks have high growth in the credit facilities and in profitability. Samad (2004) focused on the post<br />

Gulf War period of 1990-2001 in Bahrain, and examined the performance of the interest-free Islamic banks and the<br />

interest-based conventional commercial banks. His study shows that there is no major difference between the two<br />

sets of banks in terms of profitability and liquidity performances but there is a significant difference in credit<br />

performance. Kader and Asarpota (2007) evaluate the performance of the UAE Islamic banks by comparing the<br />

Islamic and conventional banks. They examine the balance sheets and income statements of 3 Islamic banks and 5<br />

conventional banks between 2000 and 2004. Their results show that Islamic banks are more profitable, less liquid,<br />

less risky and more efficient than conventional ones. They conclude that the SPL principle (see annex) is the main<br />

reason for the rapid growth of Islamic banks and suggest that they should be regulated and controlled in a different<br />

way, as the two kinds of banks have different characteristics in practice.<br />

Again, Samad and Hassan (2000) performed an intertemporal study in which they compared the performance of<br />

the Bank Islamic Malaysia Berhad (BIMB) between two periods of time 1984-1989 and 1990-1997. Then they<br />

evaluate the interbank performance by comparing the BIMB's performance with 2 conventional banks (one smaller<br />

and another larger than the BIMB) as well as 8 conventional banks. The results show that there is a significant<br />

improvement of the BIMB performance between 1984 and 1997 but this improvement is less important than in the<br />

conventional banks. Moreover, Islamic banks are less profitable and less risky but more liquid than conventional<br />

banks. Moin (2008) compared the performance of Islamic banks relatively to conventional banks in Pakistan. The<br />

study makes comparison of Meezan Bank Limited (MBL) which is the oldest Islamic bank in Pakistan and a group<br />



of 5 conventional banks for the period of 2003-2007. He adopted an inter-bank analysis of the income statements<br />

and the balance sheets of the two groups. The study found that there is no difference in terms of liquidity between<br />

the two sets of banks. Besides, the MBL is less profitable, more solvent (less risky), and also less efficient<br />

comparing to the average of the conventional banks but it is improving considerably between 2003 and 2007. This is<br />

explained by the fact that the latter banks have a dominating position in the financial market, with a longer history<br />

and experience than the Islamic banks in Pakistan, which started their business only a few years earlier. Sarkar (1999)<br />

studies the case of Islamic banks in Bangladesh. He finds that Islamic products have different risk characteristics<br />

and concludes that prudential regulation should be modified. Those studies, each related to one country and using<br />

financial ratios, tend to converge towards one conclusion: Islamic banks may be as efficient as conventional ones;<br />

however, there is a need for reforms, regulation and control in each banking system where they operate.<br />

A second group of studies is interested in Islamic banks across several countries. Bashir (1999) and Bashir<br />

(2001) examined the balance sheets and the income statements of a sample of 14 Islamic banks in 8 Middle Eastern<br />

countries between 1993 and 1998. He analyzed the determinants of Islamic Banks' performance, specifically the<br />

relationship between the profitability and the banks' characteristics. He found that the measure of profitability is an<br />

increasing function of the capital and loan ratios. Besides, the study highlights the empirical role that adequate<br />

capital ratios and loan portfolios play in explaining the performance of Islamic banks. Factors such as non-interest<br />

earning assets and customer and short-term financing, etc contribute to the increase of the Islamic banks' profit.<br />

Hassan and Bashir (2003) 1 , confirm the results of Bashir (2001) in the sense that the performance of Islamic banks is<br />

affected not only by the bank's characteristics but also by the financial environment. Their results indicate that<br />

controlling for the macroeconomic environment 2 , financial market structure, and taxation, high capital and<br />

loan-to-asset ratios improve the banks' performance. The study also provides interesting but surprising results, such<br />

as the positive correlation between profitability and overhead, and the negative impact of the size of the banking<br />

system on profitability, except for net interest margin.<br />

Lastly, the third group of studies is interested in using efficiency frontier methods. Yudistira (2003) analyzed<br />

the impact of financial crises on the efficiency of 18 Islamic banks over 1997-2000. This study is based on a<br />

non-parametric approach, Data Envelopment Analysis (DEA). It assesses a technical frontier of efficiency<br />

composed of best-practice banks. The efficiency score indicates how well a bank transforms its inputs into an<br />

optimal set of outputs. He highlighted the small inefficiency scores of the 18 Islamic banks as compared to conventional banks. Sufian<br />

(2007) adopted the same approach as Yudistira (2003) to examine the efficiency in domestic and foreign Islamic<br />

banks in Malaysia between 2001 and 2004. He provided evidence that these banks improve their efficiency slightly<br />

in 2003 and 2004. However, domestic Islamic banks are found marginally more efficient than foreign Islamic banks.<br />

Besides Islamic banks profitability is significantly and positively correlated to three different types of efficiency:<br />

technical, pure technical and scale efficiencies.<br />

To measure and analyze the technical and cost efficiency of Malaysian Islamic banks, Mokhtar, Abdullah and<br />

Al-Habsh (2006) used the Stochastic Frontier Approach (SFA). Their findings show that, on average, the efficiency of<br />

the overall Islamic banking industry (full-fledged Islamic banks and Islamic windows) has increased during the<br />

period of study while that of conventional banks remained stable over time. However, the efficiency level of Islamic<br />

banks is still lower than that of conventional banks. The study also reveals that full-fledged Islamic banks are more<br />

efficient than Islamic windows 3 for local banks, while the Islamic windows of foreign banks tend to be more efficient<br />

than those of domestic banks.<br />

All those studies focus on one or a few countries. Our study aims at examining and evaluating the performance<br />

of Islamic banks operating in 17 countries in the Middle East, Asia and Africa, but also in the United Kingdom.<br />

This scope of analysis will allow us to compare Islamic bank efficiency across the differences characterizing those<br />

countries. We use a Stochastic Frontier Approach (SFA) over the period 2001-2008 to estimate a cost-efficiency<br />

frontier and derive cost-efficiency scores, while taking into account explanatory variables.<br />

The current paper is structured in the following way: Section (2) presents the model and the sample. The results<br />

are discussed in Section (3). We conclude in Section (4).<br />

1 They consider a larger sample in 21 countries between 1994 and 2001 and use cross-country bank level data.<br />

2 The Islamic banks seem to have higher profit margins in favorable macroeconomic environment.<br />

3 It refers to conventional banks that offer Islamic financial services, as part of their activity.<br />



2. Model specification and Data<br />

In the current study, we retain the parametric approach and, more specifically, use stochastic frontier analysis<br />

(SFA). It has the advantage of being more accurate than nonparametric approaches such as DEA. As explained<br />

before, it allows separating the random error term from the inefficiency term. It is therefore less sensitive to measurement<br />

errors and outliers 4 . As objective function, we choose a cost function, which takes into account the constraints of<br />

banks as financial companies seeking to optimize their financial performance: by minimizing the costs<br />

implied by the efficiency frontier, we take this constraint into account. As functional form, we choose the<br />

Translog, as it best suits the multi-products characteristic of banking technology, involving multiple inputs and<br />

outputs, cf. Mester (1997), Bauer et al. (1998), Rogers (1998) and Isik and Hassan (2002). We assume a truncated-normal<br />

distribution for the inefficiency term, while the random error follows a normal distribution. We use<br />

the maximum likelihood method for estimation. Panel data allow us to gain estimation accuracy by increasing<br />

the number of observations. However, our panel is unbalanced, as some banks are not observed at certain points in time. We<br />

use the intermediation approach, as it assesses bank efficiency as a whole. Besides, the principle of Islamic<br />

banking is participation in the company that uses the funds, on the basis of the PLS principle. Therefore, the<br />

intermediation approach emphasizes the intermediation function carried out by Islamic banks. This leads us to retain<br />

as inputs labour, physical capital and deposits. The prices of those inputs are measured respectively by personnel<br />

expenses/total assets (PERSONEXP), other expenses/total assets (OTHEREXP) and income for deposits/total<br />

deposits (INTERESTEXP). For outputs, we have net loans (LOANS), net liquid assets (LA) and total earning assets<br />

(SECURITIES). This classification is justified by the fact that Islamic banks engage in other types of profitable<br />

activities, since they do not charge interest on loans and deposits (see table 3).<br />
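As a minimal sketch (with hypothetical balance-sheet figures; this is not the authors' code), the three input prices can be constructed from balance-sheet items as follows:<br />

```python
# Illustrative sketch: constructing the three input prices used in the cost
# frontier from balance-sheet items. All figures are hypothetical; the ratio
# names follow the paper (PERSONEXP, OTHEREXP, INTERESTEXP).

def input_prices(personnel_expenses, other_expenses, income_for_deposits,
                 total_assets, total_deposits):
    """Return the three input prices as defined in the paper."""
    return {
        "PERSONEXP": personnel_expenses / total_assets,       # price of labour
        "OTHEREXP": other_expenses / total_assets,            # price of physical capital
        "INTERESTEXP": income_for_deposits / total_deposits,  # price of deposits
    }

# Hypothetical bank-year observation (millions of dollars)
prices = input_prices(personnel_expenses=12.0, other_expenses=18.0,
                      income_for_deposits=25.0, total_assets=1200.0,
                      total_deposits=900.0)
print(prices)
```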

In studies of efficiency measurement for production units, economists usually allow for a second step, set up<br />

to explain the determinants of efficiency. Battese and Coelli (1996) showed that this two-step estimate biases the<br />

efficiency scores. Indeed, the elements used in the second stage to explain efficiency also influence its determination in<br />

the first step, so excluding them from the expression of the efficiency frontier function results in a<br />

measurement bias. Battese and Coelli therefore advised introducing into the frontier function a vector of explanatory<br />

variables. The computed model is thus given by equation (1):<br />

LnCTijt = α0 + Σm αm ln pm,ijt + Σs βs ln ys,ijt + ½ Σm Σn αm,n ln pm,ijt ln pn,ijt<br />

+ ½ Σs Σt βs,t ln ys,ijt ln yt,ijt + Σm Σs δm,s ln pm,ijt ln ys,ijt + zijt θ + vijt + uijt   (1)<br />
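The restrictions imposed on the cost frontier (symmetry, homogeneity in prices and adding-up) can be illustrated numerically. The following sketch (coefficient values are made up; this is not the authors' estimation code) evaluates the deterministic part of the Translog cost function and checks that the restrictions make it linearly homogeneous in input prices:<br />

```python
import math

# Sketch: the deterministic part of the Translog cost function of equation (1),
# with made-up coefficients satisfying symmetry, homogeneity and adding-up.

def translog_lncost(p, y, a0, a, b, A, B, D):
    lp = [math.log(v) for v in p]
    ly = [math.log(v) for v in y]
    lnc = a0
    lnc += sum(a[m] * lp[m] for m in range(len(p)))
    lnc += sum(b[s] * ly[s] for s in range(len(y)))
    lnc += 0.5 * sum(A[m][n] * lp[m] * lp[n]
                     for m in range(len(p)) for n in range(len(p)))
    lnc += 0.5 * sum(B[s][t] * ly[s] * ly[t]
                     for s in range(len(y)) for t in range(len(y)))
    lnc += sum(D[m][s] * lp[m] * ly[s]
               for m in range(len(p)) for s in range(len(y)))
    return lnc

# Restricted coefficients: sum(a) = 1, A symmetric with zero row sums,
# each column of D sums to zero (the adding-up restriction).
a0, a, b = 0.2, [0.6, 0.4], [0.7, 0.3]
A = [[0.1, -0.1], [-0.1, 0.1]]
B = [[0.05, 0.02], [0.02, 0.04]]
D = [[0.03, -0.01], [-0.03, 0.01]]

p, y = [2.0, 5.0], [3.0, 7.0]
base = translog_lncost(p, y, a0, a, b, A, B, D)
scaled = translog_lncost([2.0 * v for v in p], y, a0, a, b, A, B, D)
# Doubling all input prices raises ln(cost) by exactly ln(2)
print(scaled - base, math.log(2.0))
```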

where pm and pn are input prices and ys and yt are output quantities. Because of the specific form of the cost<br />

frontier function, we impose constraints of symmetry (αm,n = αn,m and βs,t = βt,s), homogeneity in prices (Σm αm = 1) and<br />

adding-up (Σm αm,n = Σn αn,m = Σm δm,s = 0) 5 . Homogeneity is imposed by normalizing total cost, the<br />

labour price and the physical capital price by the financial capital price. The composite error term also takes a specific<br />

functional form. The random components vijt are independently and identically distributed according to a normal<br />

distribution, N(0, σv²), while the bank inefficiency components, uijt > 0, are independently but not identically<br />

distributed according to a truncated-normal distribution. The Stochastic Frontier Analysis assumes that the<br />

inefficiency component of the error term is positive; that is, higher bank inefficiency is associated with higher cost.<br />

The inefficiency of bank i in country j at time t is defined as exp (ûijt) where ûijt is the estimated value of uijt.<br />

However, only the composite error term εijt = vijt + uijt can be observed from the estimation of the cost function. The best<br />

predictor of uijt is therefore the conditional expectation of uijt given εijt. To retrieve the inefficiency<br />

component from the composite error for each bank from the cost function estimation, we use the method of Jondrow<br />

et al. (1982) to calculate the conditional expectation. To investigate factors that are correlated with bank<br />

inefficiencies, we use the so-called conditional mean model of Battese and Coelli (1993, 1995), which permits<br />

single-step estimation of the cost function and identification of the correlates of bank inefficiencies. In particular, the<br />

estimation procedure allows for bank inefficiencies to have a truncated-normal distribution that is independently but<br />

4 We use SFA instead of TFA or DFA because TFA, although easy to implement, yields poor information, while DFA requires the assumption<br />

that cost efficiency is time invariant. Besides, when the time period of the panel is short, the random noise terms may not average to 0, and<br />

substantial amounts of random noise will appear in the cost inefficiency error component.<br />

5 Homogeneity constraints are imposed by normalizing total costs and the prices of two of the three inputs by the price of the third one.<br />



not identically distributed over different banks. The mean of the inefficiency term is modelled as a linear function of<br />

a set of bank-level variables. Specifically, the inefficiency terms, uijt are assumed to be a function of a set of<br />

explanatory bank-specific variables, zijt and a vector of coefficients to be estimated, θ:<br />

uijt = zijt θ + wijt   (2)<br />

where the random variable wijt has a truncated-normal distribution with zero mean and variance σu². The point<br />

of truncation is −zijt θ, so that wijt > −zijt θ and uijt > 0. The inefficiency component of the composite error term therefore<br />

has a truncated normal distribution, whose point of truncation depends on the bank-specific characteristics so that<br />

the inefficiency terms are non-negative. To estimate the stochastic efficiency frontier, measures of bank inefficiency<br />

and correlates of bank inefficiencies given by Equations (1) and (2), we use the Frontier econometric program<br />

developed by Coelli (1996).<br />
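As a rough numerical sketch of the Jondrow et al. (1982) predictor used above (formulas assumed from the standard normal / truncated-normal setup; this is not the Frontier program itself):<br />

```python
import math

# Sketch of the conditional expectation E[u | eps] for the cost frontier,
# where eps = v + u. Here mu = z'theta is the conditional mean of u from
# equation (2); mu = 0 recovers the half-normal case. Parameter values below
# are made up for illustration.

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cond_mean_u(eps, mu, sigma_u, sigma_v):
    """E[u | eps] with eps = v + u, u ~ N+(mu, sigma_u^2), v ~ N(0, sigma_v^2)."""
    s2 = sigma_u ** 2 + sigma_v ** 2
    mu_star = (sigma_v ** 2 * mu + sigma_u ** 2 * eps) / s2  # conditional mean parameter
    sig_star = sigma_u * sigma_v / math.sqrt(s2)             # conditional std deviation
    z = mu_star / sig_star
    # Mean of a N(mu_star, sig_star^2) variable truncated at zero
    return mu_star + sig_star * norm_pdf(z) / norm_cdf(z)

# A larger positive residual implies a larger predicted inefficiency,
# and hence a larger cost-inefficiency score exp(u).
low = cond_mean_u(0.05, 0.0, 0.2, 0.1)
high = cond_mean_u(0.40, 0.0, 0.2, 0.1)
print(low, high)
```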

The variables influencing efficiency, and therefore enabling us to explain it, are related to characteristics of the<br />

banking firm and its production process, as well as the environment in which banks operate. The size of the bank has<br />

often been used in the literature as a determinant of efficiency. Allen and Rai (1996) showed that large banks can<br />

take advantage of economies of scale by sharing costs in the production process. Size is measured by the logarithm of<br />

total assets. The same authors and, more specifically, authors who have worked on Islamic banks, such as Yudistira<br />

(2004), take into account the regulatory and competitive conditions under which banks operate. Thus, the<br />

profitability of banks is measured by net income/total assets (ROA) (or net income/equity (ROE)), and for<br />

risk-taking propensity we use the ratio equity/total assets. Indeed, Islamic banks refrain from charging interest<br />

on loans and deposits in order to devote themselves to the PLS principle. This redefinition of banking practices leads to<br />

new risks that conventional banks do not incur. Hence, there is a double interest in our study in assessing the<br />

impact of their risk-taking propensity on efficiency (see table 3).<br />

Another variable that could have an impact on efficiency is market share, measured by the ratio of the bank's<br />

total deposits to total deposits in the whole banking system. It can increase costs for the banking system in<br />

general, because it results in slack and therefore inefficiency that cannot be resolved. However, it can have a positive<br />

impact on efficiency if it is the result of consolidation and market selection of the largest and most efficient banks.<br />

It then appears through lower costs, provided the market is contestable. GDP per capita is a proxy for the<br />

level of development. It influences many factors related to the demand and supply of banking services, mainly deposits<br />

and loans. Countries with a higher level of development are therefore supposed to have more developed banking<br />

systems, with more competitive interest rates and profit margins. Demand density for banking products (measured by<br />

deposits per square kilometre) has a negative impact on costs: in countries with high demand density, banks bear<br />

lower costs in the distribution of banking products. The provision of banking services may also be affected by<br />

population density. In countries where this variable is low, banking costs are higher and banks are not encouraged to<br />

increase their efficiency. We test whether those variables are significant or not, according to their relative<br />

correlation. We use data from balance sheets and income statements, in their standard universal version, from the<br />

Bankscope database. The values of the variables are expressed in current dollars and have been deflated by the consumer<br />

price index of the current year in order to reflect macroeconomic differences among countries. The macroeconomic<br />

variables come from the IMF's International Financial Statistics, available through Datastream. Total deposits<br />

in each country, used for the calculation of market power, were converted into dollars using end-of-period market<br />

exchange rates.<br />
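The deflation step can be sketched as follows (hypothetical numbers; the CPI base is an assumption for illustration):<br />

```python
# Sketch: deflating a balance-sheet value expressed in current dollars by the
# consumer price index of the current year, as done for the Bankscope data.

def deflate(nominal_value, cpi, base=100.0):
    """Convert a current-dollar value to constant dollars of the CPI base year."""
    return nominal_value * base / cpi

# 500 current dollars observed in a year where the CPI stands at 125
real_assets = deflate(nominal_value=500.0, cpi=125.0)
print(real_assets)
```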

3. Summary<br />

Our paper provides the following results: Islamic banks are efficient at 92.72% on average over the period 2001-<br />

2008. This efficiency differs depending on the region, with the maximum efficiency displayed by Islamic banks<br />

operating in Asia (96.21%). This level reflects the strong performance of the Malaysian and Pakistani banks that<br />

constitute most of our Asian sample. Malaysia in particular is emerging as one of the most developed centers of<br />

Islamic finance, after Iran and Saudi Arabia. Since 1975 the government has reformed the financial system so that it<br />

promotes the development of Islamic banking alongside conventional finance. This was made possible notably through<br />

the Malaysian Islamic Banking Act of 1983. In the Pakistani case, since 1978, the government has fostered the<br />

transformation of the banking system through the constitution of the Commission for Transformation of the Financial<br />

System (CTFS) and the establishment of an Islamic Banking Department by the State Bank of Pakistan. Thus the<br />

government accompanied and framed this transformation through an appropriate regulatory system. Besides, Islamic<br />



banks operating in Africa displayed an average efficiency score of 93.34%, with 92.75% for Sudan, whose banking<br />

system is essentially Islamic (by government legislation). On the other hand, Islamic banks operating in the United<br />

Kingdom have an efficiency of 93.25%. This country has made efforts to include in its banking regulation specific<br />

rules enabling Islamic banks to operate better in the British environment. Those banks therefore attract capital from<br />

Muslim immigrants and also petrodollars from the Middle East seeking investment opportunities. Finally, the<br />

Middle East region has an efficiency score of about 92.49%. In particular, Iran, whose banking system is essentially<br />

Islamic (by government legislation), displays an average efficiency of 94.38%. As a whole, Islamic bank efficiency has<br />

a decreasing trend over the period of analysis, with a peak in 2005. The peak may be associated with the war in Iraq that<br />

began in 2003, which gave rise to an oil shock; petrodollar inflows into Islamic banks could explain it.<br />

Besides, the lowest levels of efficiency appear in 2007 and 2008. This period corresponds to the subprime crisis.<br />

Therefore we wanted to check its impact by using a dummy variable (D_subprime). However, it was not significant<br />

(table 5, regression iv). Despite the decrease in Islamic banks' efficiency during this period, we cannot assert that<br />

this is due to the crisis.<br />

To conclude, Islamic banks have expanded significantly in recent years because of increasing petrodollar<br />

inflows following the oil shocks. These banks have been growing at a rate of 15% per year since the early 2000s, and<br />

wherever they settle, the authorities try to implement adequate regulation in order to enable them to integrate into the<br />

banking system of those countries. It is within this context that our study measures and seeks to understand what<br />

explains the efficiency of these Islamic banks. For this purpose, we use the single-step stochastic frontier method<br />

(Battese and Coelli, 1996), which allows us to integrate explanatory variables of efficiency into the cost frontier. Thus,<br />

this study shows that size has a negative impact on efficiency, indicating that, because they distribute Islamic<br />

financial services, these banks may not benefit from economies of scale.<br />

Profitability has a positive impact on efficiency, which is consistent with the literature. Finally, market power<br />

has a positive impact on efficiency: the more clients Islamic banks have, the more efficient they are. Furthermore,<br />

our study shows that in general Islamic banks are efficient, with an average of 92.72%. However, there are<br />

differences across regions. Banks operating in countries with an Islamic banking system are not necessarily the most<br />

efficient. The most efficient region is Asia (96%), with Pakistani and Malaysian banks. However, operating in a<br />

country where Islamic banking is imposed by government legislation, or in the Middle East, is less costly for Islamic<br />

banks. Another observed result is the decrease in efficiency at the end of the period (2007-2008). Although this<br />

period corresponds to the subprime crisis, we found no evidence that this decrease was due to the crisis. A deepening<br />

of this study would be to measure the efficiency of Islamic banks relative to conventional banks.<br />

4. References<br />

Al-Jarrah, I. and Molyneux, P. (2003), “Cost Efficiency, Scale Elasticity and Scale Economies in Arabian Banking”,<br />

paper presented at the 10 th Annual Conference of the Economic Research Forum for the Arab Countries, Iran and<br />

Turkey, Morocco, 16-18 December 2003.<br />

Allen, L. and Rai, A. (1996), “Operational Efficiency in Banking: An International Comparison”, Journal of Banking<br />

and Finance 20, 655-672.<br />

Battese, G.E. and Coelli, T.J. (1993), “A stochastic frontier production function incorporating a model for technical<br />

inefficiency effects”, Working Papers in Applied Statistics, No. 65, Department of Economics, University of<br />

New England, NSW, Australia.<br />

Battese, G.E. and Coelli, T.J. (1995), "A Model for Technical Inefficiency Effects in a Stochastic Frontier<br />

Production Function for Panel Data," Empirical Economics 20(2), 325-32.<br />

Battese, G.E. and Coelli, T.J. (1996), "Identification of Factors Which Influence The Technical Inefficiency Of<br />

Indian Farmers," Australian Journal of Agricultural Economics, Australian Agricultural and Resource<br />

Economics Society 40(02).<br />

Bashir, A.H. M., (1999), “Risk and Profitability Measures in Islamic Banks: The Case of Two Sudanese Banks,”<br />

Islamic Economic Studies 6, 1–24.<br />

Bauer, P.W., Berger, A.N., Ferrier, G.D. and Humphrey, D.B. (1998), "Consistency Conditions for Regulatory<br />

Analysis of Financial Institutions: A Comparison of Frontier Efficiency Methods," Journal of Economics and<br />

Business 50(2), 85-114.<br />

Berger, A.N. and Humphrey D.B. (1997), “Efficiency of financial institutions: International Survey and Directions<br />

for Future Research”, European Journal of Operational Research 98, 175-212.<br />



Berger, A. N. and Mester L. (1997), “Inside the black box: What explains differences in the efficiencies of financial<br />

institutions?”, Journal of Banking and Finance 21, 895-947.<br />

Berger, A., Hunter W., Timme S. (1993), “The Efficiency of Financial Institutions: A Review and Preview of<br />

Research Past, Present, and Future”, Journal of Banking and Finance 17(2-3), 221-249.<br />

Chapra, M. U. (2000), “Why Has Islam Prohibited Interest? Rationale Behind the Prohibition of Interest”, Review of<br />

Islamic Economics, 9, 5–20.<br />

Coelli, T. (1996), “A Guide to Frontier Version 4.1: A Computer Program for Stochastic Frontier Production and Cost<br />

Function Estimation”, CEPA Working Paper.<br />

Cebenoyan, S., Cooperman, E. and Charles, R. (1993). "Firm Efficiency and the Regulatory Closure of S & Ls: An<br />

Empirical Investigation", The Review of Economics and Statistics 75, 540-545.<br />

Cihak, M. and Hesse, H. (2008), "Islamic Banks and Financial Stability: An Empirical Analysis," IMF Working<br />

Papers 08/16, International Monetary Fund.<br />

Dar, H. (2003), Handbook of International Banking, Edward Elgar, chap. 8.<br />

Hasan, Z. (2005), “Islamic Banking at the Crossroads: Theory versus Practice”, in “Islamic Perspectives on Wealth<br />

Creation”, edited by M. Iqbal and R. Wilson, Edinburgh University Press, UK, pp. 3-20.<br />

Hasan, I., Lozano-Vivas, A. and Pastor, J. (2000), “Cross-border Performance in European Banking”, Bank of Finland<br />

Discussion Papers 24.<br />

Hassan, M. K. (2003), Textbook on Islamic Banking, Dhaka, Bangladesh: Islamic Economic Research Bureau.<br />

Isik, I. and M.K. Hassan, (2002), “Technical, scale and allocative efficiencies of Turkish banking industry”, Journal<br />

of Banking and Finance 26, 719-766.<br />

Kader, J.M., A.J. Asaporta, and Al-Maghaireh, A. (2007), “Comparative Financial Performance of Islamic Banks<br />

vis-à-vis Conventional Banks in the UAE.”, Proceeding on Annual Student Research Symposium and the<br />

Chancellor’s Undergraduate Research Award, http://sra.uaeu.ac.ae/CURA/Proceedings.<br />

Mester, L.J., (2007), "Some thoughts on the evolution of the banking system and the process of financial<br />

intermediation," Economic Review 1(2), 67 – 75.<br />

Moktar, H.S., N. Abdullah and S.M. Al-Habshi (2006), “Efficiency of Islamic Banks in Malaysia: A Stochastic<br />

Frontier Approach,” Journal of Economic Cooperation among Islamic Countries 27 (2), 37–70.<br />

Rogers, K.E. (1998), "The non traditional activities and the efficiency of US commercial banks", Journal of banking<br />

and Finance 22, 467-482.<br />

Samad, A. (2004), “Bahrain Commercial Bank’s Performance during 1994-2001”, Credit and Financial<br />

Management Review 10 (1), 33-40.<br />

Samad, A. and M.K. Hassan (2000), “Performance of Islamic Bank during 1984-1997: An exploratory study.”<br />

Thought on Economics 10 (1), 7-26.<br />

Samad, A. (1999), “Relative Performance of Conventional banking vis-à-vis Islamic Bank in Malaysia.” IIUM<br />

Journal of Economics and Management 7 (1), 1-25.<br />

Sarker, M.A. (1999), “Islamic Banking in Bangladesh: Performance, Problems, and Prospects,” International Journal<br />

of Islamic Financial Services, 1.<br />

Yudistira, D. (2004), “Efficiency in Islamic Banking: An Empirical Analysis of Eighteen Banks,” Islamic Economic<br />

Studies 12 (1), 1–19.<br />



THE DETERMINANTS OF THE EAD<br />

Yenni Redjah & Jean Roy, HEC Montréal<br />

Inmaculada Buendía Martínez, Universidad Castilla-La Mancha & HEC Montréal<br />

Email: jean.roy@hec.ca<br />

Abstract. Defaults by individuals were at the source of the recent financial crisis, hence the need to fully understand credit risk from<br />

personal borrowers. Expected loss from credit is usually decomposed into probability of default, loss given default and exposure at<br />

default (EAD), the last factor being the least investigated to date. This research seeks to contribute by identifying the determinants of<br />

EAD. This empirical study took place at a large Canadian financial cooperative with some $150B in assets. It is worth noting that<br />

Canada in particular, and financial cooperatives in general, exhibited great resiliency during the crisis. The sample consisted of more<br />

than 11,000 cases of default occurring between 2003 and 2008 on revolving lines of credit granted to individuals. The independent<br />

variables consisted of 11 idiosyncratic and 10 macroeconomic variables. The methodology followed basically that of Jacobs (2008),<br />

which was applied to corporate loans. The results show that several factors are significant, namely the borrower’s age, the exposure<br />

limit, the amount drawn, the interest rate applied on the line of credit and the utilization behavior. Moreover, the relationship of EAD<br />

to macroeconomic factors points to its procyclicality. Overall, more than 50% of the variance of EAD can be explained. In sum, the<br />

research sheds light on a credit factor, EAD on credits to individuals, which has remained rather obscure up to now. The improved<br />

understanding of EAD can lead to better risk modeling, better credit management and, potentially, improve financial stability.<br />

Keywords: Exposure at default, Credit risk, Provision for loan losses, Bank capital<br />

JEL classification: G21<br />

1 Introduction<br />

Credit exposure is the most important risk assumed by financial institutions with regard to their traditional<br />

activities. The recent crisis is an illustration, if we consider how subprime loans were granted and managed. During<br />

the past decades credit risk has inspired much research led by practitioners and academics as well as regulators. The<br />

latter, through the Basel Committee, promote sound credit risk management practices to ensure the stability and<br />

solvency of the financial system. In the second Basel Accord a formula emerged for the loss provision calculation<br />

which relates the probability of default (PD), the loss given default (LGD) and the exposure at default<br />

(EAD). Given that the last factor is the least investigated, this research seeks to contribute by identifying the<br />

determinants of EAD for household accounts.<br />

The focus here is on the revolving credit lines granted to individuals, contrasting with past empirical<br />

studies of EAD for corporate loans. The data set includes 11,278 defaults between 2003 and 2008 that have been<br />

recorded by a large Canadian financial institution. The exposure at default is regressed on macroeconomic and<br />

idiosyncratic variables using a stepwise approach. Several variables turned out to be significant in explaining the<br />

EAD, such as the utilization ratio before default, the authorized and utilized amounts, age, the interest rate<br />

charged on the credit line, the 3-month risk-free rate and the TSE300 index. The last two variables support the<br />

hypothesis that the EAD is procyclical.<br />

The rest of the paper is divided as follows. The first section summarizes the literature dedicated to the EAD<br />

factor. The second section presents the database and the descriptive statistics. The third section presents the<br />

methodology, the results and the residual analysis. A conclusion closes the paper.<br />

2 Literature review<br />

The literature dedicated to the EAD can be divided into three segments. Firstly, there is the documentation provided<br />

by the BIS, through the Basel Committee, which gives the qualitative and mathematical framework for<br />

EAD estimation. Secondly, several theoretical papers, published mainly before 2006, were used by the committee as<br />

consultative papers to build the regulatory framework. Finally, we can find a few empirical studies on EAD,<br />

published mostly after 2006, which can be used to compare the results obtained here. A brief summary of this<br />

literature is presented in the two following sub-sections.<br />



2.1 The Basel II Accord and the EAD<br />

Under Basel II, the EAD is a key factor for the estimation of both the loss provision that appears in the balance sheet<br />

and the regulatory capital that financial institutions have to maintain. Together, they provide a confidence level<br />

of 99.9% for bank solvency. Thus, they cover the quasi-totality of the loss distribution. Two formulas have<br />

been developed to estimate, respectively, the expected loss for which a provision is made (equation 1) and the<br />

regulatory capital (equation 2). These two equations show the importance of having an accurate estimation of the<br />

EAD, given its linear impact on these two measures.<br />

The EAD can be estimated in two ways, using the Loan Equivalent Factor (equation 3) or the Credit Conversion<br />

Factor (equation 4). The LEF represents the portion of the unutilized line that is expected to be drawn before default<br />

(Stephanou & Mendoza, 2005). The CCF is defined by Jiminez, Lopez and Saurina (2006) as the fraction of the<br />

total commitment at time t that will have been drawn when the borrower reaches the default time. The CCF can be<br />

seen as a utilization ratio of the credit line. The regulators recommend using the CCF instead of the LEF to estimate<br />

the EAD, which is why we choose the former as our proxy. This factor is then our dependent variable. Given that<br />

the EAD is usually expressed in monetary terms, we only have to apply equation 5 or 6 to obtain its estimate.<br />

E(L) = PD × LGD × EAD   (1)<br />

K = {LGD × Φ[(Φ⁻¹(PD) + √ρ Φ⁻¹(α)) / √(1 − ρ)] − PD × LGD} × EAD   (2)<br />

LEFi,t(τ) = (Drawni,t − Drawni,t−τ) / Unutili,t−τ   (3)<br />

CCFi,t(τ) = Drawni,t / Total_Commitmenti,t−τ = Drawni,t / (Drawni,t−τ + Unutili,t−τ)   (4)<br />

EADi,t(τ) = Drawni,t−τ + LEFi,t(τ) × Unutili,t−τ   (5)<br />

EADi,t(τ) = CCFi,t(τ) × (Drawni,t−τ + Unutili,t−τ)   (6)<br />

2.2 Empirical studies of the EAD<br />
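A small numerical sketch (with made-up amounts, not the paper's data) shows that the LEF and CCF routes of equations (3)-(6) are two consistent ways to the same exposure at default:<br />

```python
# Sketch: the two equivalent routes to the exposure at default. tau is the
# horizon before default at which drawn and unutilized amounts are observed.

def lef(drawn_default, drawn_before, unutil_before):
    # Equation (3): share of the previously unutilized line drawn down by default
    return (drawn_default - drawn_before) / unutil_before

def ccf(drawn_default, drawn_before, unutil_before):
    # Equation (4): amount drawn at default over the earlier total commitment
    return drawn_default / (drawn_before + unutil_before)

# Hypothetical credit line: limit of 100, 60 drawn one year before default,
# 90 drawn at the default date.
drawn_t, drawn_prev, unutil_prev = 90.0, 60.0, 40.0

ead_lef = drawn_prev + lef(drawn_t, drawn_prev, unutil_prev) * unutil_prev    # equation (5)
ead_ccf = ccf(drawn_t, drawn_prev, unutil_prev) * (drawn_prev + unutil_prev)  # equation (6)
print(ead_lef, ead_ccf)  # both recover the amount actually drawn at default
```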

Even if studies on EAD are scarce, we can still point out some interesting results. Jiminez, Lopez and Saurina<br />

(2006) found a clear difference between non-defaulted and defaulted loans regarding the utilization ratio. In fact, for<br />

a given date of default, the non-defaulted borrowers maintained a ratio of 50% during the five previous years.<br />

The second group, composed of the defaulted loans, showed a ratio of 60% five years before default, which rose<br />

monotonically to reach the 90% mark in the year of default. In another paper, Jacobs (2008) implemented a simultaneous-<br />

regressions model with three different dependent variables (LEF, CCF and EADF) to which the same set of<br />

independent variables was applied. The CCF turned out to be the most efficient model with regard to the R². The<br />

key findings were that the EAD risk decreases with the PD, the credit quality and with higher levels of leverage and<br />

liquidity. On the other hand, the EAD risk increases with company size and for unsecured loans. Finally,<br />

subordinated debt induces more hazardous EAD than senior debt. Concerning the constitution of an EAD database,<br />

Gruber and Parchert (2006) argue that, unlike the PD, which requires considering the entire portfolio of loans, the<br />

EAD needs a database including only the defaulted loans.<br />

Regarding the procyclicality of the EAD, Asarnow and Marker (1995) found that, upon a sudden deterioration of<br />

economic conditions or credit quality, the utilization of the normally unutilized portion of the loan increases<br />

faster for borrowers with a good credit rating. They explain this by the fact that less secure companies face more<br />

monitoring by banks. Thus, the procyclicality of the EAD is driven primarily by the obligors classified among the<br />

better credit grades. Stephanou and Mendoza (2005) adopted the same point of view and assumed that obligors with<br />

a good credit rating default following a sudden deterioration of their financial situation, possibly due to an<br />

unexpected economic downturn. On the other hand, riskier borrowers default after a<br />

gradual worsening of their solvency. In a broader perspective, Allen and Saunders (2003) explain that banks reduce<br />

the volume of loans during economic slowdowns, at the moment when companies need them most to overcome<br />

liquidity problems. Other empirical studies, such as Jacobs (2008), Kim (2008), Jiminez and Mancia (2007) and Jiminez<br />

et al. (2006), all find evidence that the EAD follows a cyclical pattern: the number of defaults increases<br />

during an economic crisis, and the values of the CCF and LEF also reach their peaks at these same moments. The<br />

variables that best explain this phenomenon are GDP, the three-month risk-free rate, the growth rate of real<br />

estate prices and the S&P 500 index. However, we have to be cautious when comparing our results to these<br />

past studies, since the populations studied are not of the same type. In fact, we concentrate here on loans granted to<br />

individuals, while the past literature focuses on corporate loans.<br />



3 Database and descriptive statistics<br />

Before analyzing the data, let us see how defaults are recorded. Defaults are identified for individuals who<br />

received a tracking code, borrowers for whom the interest rate was reduced, and those who are 90 days late on their<br />

payments. Defaults also include restructured loans, non-performing loans and loans for which the guarantee was<br />

taken over by the institution. The following sections present the data and variables, and describe them statistically.<br />

3.1 The database<br />

The database includes 11,278 cases of default between 2003 and 2008 on revolving credit lines granted to individuals<br />

in the province of Quebec. A large set of variables has been taken into account. The idiosyncratic variables include<br />

the age of the borrower, the utilization ratio in each of the twelve months before default (12 variables), the interest rate<br />

charged on the credit line, the exposure limit, the drawn amount, the credit score and a large set of dummies covering the types<br />

of credit line (7), collateral (6), interest rate (3), credit rating (9) and years (6). The macroeconomic variables (8)<br />

include indicators on interest rates (the 3-month Canadian Treasury bill and the bank rate), the real estate market<br />

(the 5-year Canadian mortgage rate, the REIT index and Canadian new housing starts), consumption (spending<br />

on durable goods) and the overall performance of the economy (GDP and the TSE300 index). In addition to their<br />

growth rates, the face values of the 3-month T-bill and the 5-year mortgage rate were also added.<br />

3.2 Descriptive statistics<br />

We can now take a closer look at some selected variables. Table 1 presents the annualized observations of the CCF,<br />

calculated by equation 4, and the averages of the idiosyncratic variables. Firstly, we can see that the exposure limit<br />

and the drawn amount have grown monotonically, which theoretically amplifies the EAD risk; instead, the CCF<br />

reaches its minimum during the crisis (2007, 2008), which could imply tighter monitoring by the institution. The<br />

credit rating has also declined, which could be a sign that the credit portfolio managers wanted to deal with more<br />

secure borrowers or, in accordance with Stephanou & Mendoza (2005), that credit-grade obligors are responsive to a<br />

deterioration of the economy. Globally, the CCF is 0.7887, with relatively fine skewness and kurtosis parameters.<br />

However, the bimodal aspect of the CCF distribution, with peaks at 0 and 1, indicates that it does not have a Normal shape.<br />

2003 2004 2005 2006 2007 2008 TOTAL<br />

Obs.¹ 479 899 1394 2089 2362 4055 11278<br />

Proportions 0,0425 0,0797 0,1236 0,1852 0,2094 0,3595 -<br />

CCF ² 0,8035 0,8496 0,7938 0,8050 0,7866 0,7647 0,7887<br />

StdDev 0,3489 0,3104 0,3644 0,3495 0,3520 0,3543 0,3515<br />

Skewness -1,5769 -2,0319 -1,4986 -1,5636 -1,4323 -1,2853 -1,4529<br />

Kurtosis 3,8371 5,5830 3,5052 3,7980 3,4351 3,0721 3,4903<br />

Exposure Limit ($) 5 945,15 6 444,53 6 777,98 10 176,74 13 512,38 14 714,06 11 609,40<br />

StdDev ($) 8 710,56 11 508,61 14 236,32 29 359,39 31 475,24 36 625,92 29 987,94<br />

Drawn Amount ($) 4 733,30 5 147,52 5 132,89 7 564,09 9 335,13 10 267,03 8 293,48<br />

StdDev ($) 8 329,50 8 603,45 12 338,06 25 613,64 23 159,89 28 695,96 23 704,05<br />

Interest Rate 0,0948 0,0874 0,0913 0,1012 0,1057 0,0937 0,0969<br />

StdDev 0,0285 0,0293 0,0283 0,0307 0,0327 0,0326 0,0319<br />

Credit Rating 5,42 5,59 5,46 5,38 5,05 5,25 5,29<br />

Mode 6 7 7 6 6 7 7<br />

Age 36,84 37,00 41,10 37,11 38,46 39,31 38,66<br />

1 - Defaults on individual credit lines between 2003 and 2008<br />

2 - CCF = Sum[ Drawn at the default time ÷ Exposure Limit] ÷ Observations<br />

Table 1: Annualized descriptive statistics: idiosyncratic variables<br />
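The CCF defined in footnote 2 is simply the average ratio of the drawn amount at default to the exposure limit. A minimal sketch, using made-up numbers rather than the paper's data:<br />

```python
import numpy as np

def ccf(drawn_at_default, exposure_limit):
    """Credit conversion factor per footnote 2: the average of
    drawn-at-default / exposure-limit across defaulted credit lines."""
    drawn = np.asarray(drawn_at_default, dtype=float)
    limit = np.asarray(exposure_limit, dtype=float)
    return float(np.mean(drawn / limit))

# Three hypothetical defaulted lines (illustrative values only):
drawn = [8000.0, 5000.0, 12000.0]
limit = [10000.0, 10000.0, 12000.0]
print(ccf(drawn, limit))  # (0.8 + 0.5 + 1.0) / 3 ≈ 0.7667
```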

Table 2 shows that the resulting CCF is in line with the literature, since it increases steadily as borrowers approach the default moment. The fact that the CCF is smaller at the time of default is probably further evidence of stronger monitoring by the financial institution, which tries to recover as much as it can to minimize the default size. Logically, we obtain a utilized amount that grows linearly while the exposure limit remains stable.<br />



Months to Default Obs. CCF Std Dev Skew Kurt Exp. Lim. Std Dev Drawn Std Dev<br />

0 11278 0,7887 0,3515 -1,4529 3,4903 11 609 29 988 8 293 23 704<br />

-1 11232 0,8829 0,2361 -2,1959 6,9024 11 726 30 431 9 178 24 456<br />

-2 11177 0,8743 0,2435 -2,1025 6,4484 11 773 30 596 9 081 24 191<br />

-3 11119 0,8681 0,2491 -2,0370 6,1357 11 947 31 226 9 046 24 360<br />

-4 10884 0,8527 0,2628 -1,9429 5,7570 12 140 31 811 8 895 24 313<br />

-5 10622 0,8397 0,2738 -1,8264 5,2737 12 133 31 834 8 706 24 113<br />

-6 10338 0,8268 0,2857 -1,7502 4,9873 12 164 32 223 8 465 24 081<br />

-7 10046 0,8122 0,2975 -1,6109 4,4311 12 130 32 227 8 241 23 782<br />

-8 9790 0,8011 0,3076 -1,5447 4,1680 12 123 32 381 8 082 23 615<br />

-9 9495 0,7931 0,3147 -1,4876 3,9548 12 120 32 458 7 920 23 434<br />

-10 9205 0,7816 0,3241 -1,4117 3,6844 12 109 32 617 7 687 22 998<br />

-11 8901 0,7764 0,3275 -1,3867 3,5999 12 085 32 676 7 523 22 737<br />

-12 8611 0,7693 0,3315 -1,3434 3,4726 12 063 32 969 7 361 21 998<br />

Table 2: Descriptive statistics: Evolution of the CCF 12 months before default<br />

Since the credit score is a key element in measuring credit quality and the probability of default, Table 3 presents the descriptive statistics for this variable. First of all, we see that the traditional relationships we would expect between the CCF and the credit rating hold. The majority of the observations were rated between 6 and 8, the least secure borrowers. The worst score here is in fact 8, since 9 is an unclassified category. The exposure limit could be a tool for this financial institution to manage its credit risk: in fact, the total commitment size decreases as credit quality declines. Age follows the same trend, with good ratings assigned to much older borrowers. Highly rated obligors also benefit from low interest rates; rates increase roughly linearly up to the fifth credit score and then start to decrease, potentially for monitoring purposes.<br />

1 2 3 4 5 6 7 8 9<br />

Observations 387 847 1422 1506 1362 1933 2032 1524 264<br />

Proportions 0,0343 0,0751 0,1261 0,1335 0,1208 0,1714 0,1802 0,1351 0,0234<br />

CCF 0,5579 0,7122 0,7679 0,8001 0,7993 0,8018 0,8142 0,8376 0,7907<br />

StdDev 0,3977 0,3834 0,3664 0,3446 0,3474 0,3479 0,3378 0,3062 0,3533<br />

Skewness -0,2221 -0,9579 -1,3174 -1,5611 -1,5416 -1,5568 -1,6552 -1,8812 -1,4514<br />

Kurtosis 1,4198 2,2927 3,0577 3,8254 3,7366 3,7967 4,1238 5,1395 3,5047<br />

Age 50,2 46,7 42,5 39,9 37,6 35,8 34,6 33,8 32,1<br />

Exposure Limit ($) 32 414 24 554 16 227 12 161 8 393 10 123 7 324 6 641 4 700<br />

Drawn Amount ($) 16 019 15 136 11 255 8 988 6 311 7 748 5 799 5 711 3 402<br />

Interest Rate 7,9555 8,8141 9,5839 10,1124 10,3359 10,1992 10,2339 9,9325 9,2356<br />

Table 3: Descriptive statistics: Distribution of the CCF by credit rating<br />

Finally, Table 4 shows the information regarding the macroeconomic variables. The CCF is negatively correlated with all the variables, especially the TSE300 index, GDP and expenses on durable goods, meaning that the EAD risk increases in a bad economic environment. These are the first signs of procyclicality in the exposure at default.<br />

Growth Rate Std Dev Correl. vs CCF P-Value<br />

3 months T-Bill -0,0087 0,1045 -0,1957 0,0995<br />

Bank Rate -0,0053 0,0645 -0,3097 0,0081<br />

5 year Mortgage Rate 0,0001 0,0265 -0,3753 0,0012<br />

Gross Domestic Product 0,0039 0,0045 -0,4613 0,0000<br />

Expenses on Durable Goods 0,0020 0,0065 -0,4939 0,0000<br />

New Housing Starts 0,0183 0,1870 -0,1019 0,3945<br />

S&P/TSE 300 Index -0,0012 0,0485 -0,4165 0,0003<br />

S&P/TSX REIT Index 0,0053 0,0418 -0,2464 0,0369<br />

Table 4: Descriptive statistics: Monthly Canadian macroeconomic variables from 2003 to 2008<br />
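The correlations reported in Table 4 pair each macro variable's growth rate with the CCF series. A sketch with toy monthly series (the values below are invented for illustration, not the paper's data):<br />

```python
import numpy as np

def corr_with_ccf(macro_growth, ccf_series):
    """Pearson correlation between a macro growth rate and the CCF."""
    x = np.asarray(macro_growth, dtype=float)
    y = np.asarray(ccf_series, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Toy series: the CCF moves against GDP growth, as in Table 4.
gdp_growth = [0.004, 0.005, 0.003, -0.002, -0.004, 0.006]
ccf_series = [0.78, 0.77, 0.79, 0.84, 0.86, 0.76]
print(corr_with_ccf(gdp_growth, ccf_series))  # strongly negative
```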



4 Methodology and results<br />

First of all, this section explains the methodology used to test the EAD empirically, which was inspired by previous studies such as Jacobs (2008) and Jimenez et al. (2006). The results of the models follow in the next subsection, and a residual analysis closes the section.<br />

4.1 Methodology<br />

The model developed here consists of a linear combination of idiosyncratic and macroeconomic variables explaining the CCF, which is our proxy for the EAD. As in the studies mentioned above, we also assume that the CCF is normally distributed, which may appear a strong hypothesis given the bimodal aspect of the distribution. We have opted for a stepwise approach, which is an appropriate choice for two main reasons. Firstly, it admits overidentified models. Secondly, it is appropriate when the literature offers no theoretical basis for choosing a specific model.<br />

The stepwise procedure uses an iterative method that adds or removes variables to build a final model that minimizes the estimation error, or maximizes the coefficient of determination, by choosing the optimal set of independent variables. At each iteration (corresponding to a candidate model), the program calculates the residual sum of squares. For each model, the stepwise application computes the F-statistic obtained by comparing the residual sum of squares of the initial model with that of the model after removing a variable. If the statistical significance is not improved, the variable is retained only if it is sufficiently significant (significance level of 10%). In the end, only the variables that are significant at the 5% level are included in the final model.<br />

The observed CCF is our dependent variable. Following Jimenez et al. (2006), we applied a logarithmic transformation of the dependent variable to resolve a problem of heteroskedasticity. The fifty-eight independent variables (thirty-one of them dummies) included in the regression model are those listed in section 3.1. The general equation implemented is as follows, where CCFi(τ) is the percentage exposure at the time of default τ for the i-th observation:<br />

Log(CCFi(τ)) = α + Σj βj · IdiosyncVarij + Σk γk · MacroVark + εt (7)<br />

The stepwise method implies that the variables interact with each other, creating a number of iterations that grows exponentially with the number of variables. Also, depending on the order of the variables in the initial model, the stepwise application can lead to different optimal solutions; running the program therefore leads to a solution that is locally optimal. In this paper, tests have been made to ensure that the solutions presented are stable and unique.<br />
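The selection logic described above can be sketched as a simplified forward-stepwise routine. This is not the paper's Matlab implementation: the fixed F-to-enter threshold below stands in for the 10%/5% significance rules, and the removal step is omitted.<br />

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of an OLS fit with an intercept."""
    Xc = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return float(resid @ resid)

def forward_stepwise(X, y, f_enter=4.0):
    """Greedy forward selection: at each iteration, add the candidate
    regressor with the largest partial F-statistic (drop in RSS scaled
    by the residual variance), stopping when no candidate exceeds f_enter."""
    n, p = X.shape
    selected = []
    while len(selected) < p:
        rss_now = rss(X[:, selected], y)
        best, best_f = None, f_enter
        for j in range(p):
            if j in selected:
                continue
            rss_new = rss(X[:, selected + [j]], y)
            df_resid = n - len(selected) - 2  # intercept + selected + candidate
            f = (rss_now - rss_new) / (rss_new / df_resid)
            if f > best_f:
                best, best_f = j, f
        if best is None:
            break
        selected.append(best)
    return selected

# Synthetic data: only columns 1 and 3 truly drive y, so they enter first.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=200)
print(forward_stepwise(X, y))
```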

4.2 Results<br />

For brevity's sake, Table 5 gives only the outcome of the final model resulting from the stepwise application. Among the 58 variables included in the initial model, one in four reached the 5% significance level. At first sight, one notices that the number of observations dropped to 11140 due to the logarithmic transformation of the dependent variable. Fourteen variables enter the final model, combining for an R² of 51.4%, and the F-statistic shows that the model is reliable. Age is negatively related to the CCF, meaning that younger borrowers carry more risk. The utilization ratio one month before the time of default is certainly the variable that best explains the CCF, judging by its significance level. So even if default is primarily due to a progressive degradation of the CCF (Table 2), the true value of the EAD can be known with some confidence only one month before default. The two variables UR-2 and UR-4 have much lower importance, given the small values of their coefficients.<br />

In line with the findings of Jimenez et al. (2006), we found that the exposure limit and the drawn amount are good predictors of the EAD. In fact, the more substantial the limit the financial institution authorizes, the smaller the exposure at default in percentage terms. This result should not be too surprising, however, since we have seen that the highest exposure limits were granted to the good-quality creditors.<br />



The interest rate charged on credit lines also shows a significant and positive impact on the CCF: when the interest rate increases, the EAD increases since payments on the credit line become larger. On the other hand, the dummy variables are not vital here. For the product types, for example, combining the number of observations of the three variables that turned out to be significant yields only 15% of the population, which does not contribute notably to the global explanation of the EAD. The same goes for the collateral dummy, for which we have only 22 data points.<br />

Coefficient Std Dev T-Stat P-Value<br />

Age -0,0010 0,0002 -5,9130 0,0000<br />

Dum Credit Line Type: Personal 0,0318 0,0123 2,5928 0,0095<br />

Dum Credit Line Type: Student 0,4494 0,0606 7,4168 0,0000<br />

Dum Credit Line Type: Protected 0,1165 0,0585 1,9903 0,0466<br />

Utilisation Ratio -1 month 2,0976 0,0734 28,5746 0,0000<br />

Utilisation Ratio -2 months -0,1849 0,0787 -2,3477 0,0189<br />

Utilisation Ratio -4 months -0,1350 0,0355 -3,8028 0,0001<br />

Dum Collateral: Specul. Res. Mrtg. -0,7138 0,1223 -5,8381 0,0000<br />

Dum Interest Rate: Floating -0,1317 0,0330 -3,9922 0,0001<br />

Interest Rate Charged 0,0051 0,0018 2,8590 0,0043<br />

Exposure Limit -1,15E-05 5,81E-07 -19,7620 0,0000<br />

Drawn Amount 1,34E-05 6,34E-07 21,0641 0,0000<br />

Delta 3 months T-Bill 0,1047 0,0380 2,7549 0,0059<br />

Delta S&P/TSE300 -0,3003 0,0957 -3,1399 0,0017<br />

Model Statistics<br />

R² 0,5140 Model F-Stat 641,56<br />

Estimator Std Dev 0,4317 Observations 11140<br />

Table 5: Final model results. Independent variable = Log(CCF) for individual credit lines from 2003 to 2008<br />

Finally, we can see that the CCF is driven by two macroeconomic variables: the 3-month T-Bill and the S&P/TSE 300 index. As we saw earlier, interest rates are positively related to the CCF, meaning that a rising interest rate increases the credit risk. In fact, as financial charges grow, liquidity issues may emerge, which can result in a default. The TSE 300 index shows that the CCF is negatively correlated with the economic cycle; thus, the EAD risk is greater during economic downturns.<br />

These findings confirm the results of previous studies on the EAD, even though those focused on corporate loans. In fact, the utilization behavior of the credit line, the exposure limit and the reaction to the economic cycle were all confirmed, moreover with the same coefficient signs. In addition, we found for the first time that the interest rate charged on the credit line plays a role in EAD fluctuations. Age also explains the EAD with the traditional, negative relationship expected from this variable, meaning that younger borrowers carry more risk.<br />

4.3 Residual analysis<br />

The model implemented here satisfies the conditions of the Gauss-Markov theorem, so we can state that it is the best linear unbiased estimator (BLUE). In fact, the value of 1.8250 obtained for the Durbin-Watson test indicates that there is no first-order autocorrelation among the residuals. Also, the Breusch-Pagan test, run under Matlab, shows no problem of heteroskedasticity. Finally, the error vector has an expected value of zero. We complete the Gauss-Markov conditions by assuming that the database is randomly constituted and by noting that the model is linear. However, the error terms do not follow a normal distribution; this alone is not sufficient to invalidate the significance levels and the reliability of the final model. The normality hypothesis made on the CCF explains this departure, since the CCF in reality follows a bimodal distribution with peaks reached at the two extremities (0 and 1).<br />
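The Durbin-Watson statistic quoted above (1.8250) can be computed directly from the regression residuals; values near 2 indicate no first-order autocorrelation. A minimal sketch (for reference, statsmodels also provides `durbin_watson` and `het_breuschpagan` for the Breusch-Pagan test mentioned in the text):<br />

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive residual differences divided by
    the residual sum of squares; near 2 means no first-order autocorrelation."""
    e = np.asarray(resid, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

# White-noise residuals give a statistic close to 2.
rng = np.random.default_rng(1)
e = rng.normal(size=5000)
print(durbin_watson(e))
```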

5 Conclusion<br />

The recent financial crisis reminds us that managing credit portfolios is, more than ever, a central issue in modern finance. Among the different topics, we chose here to focus on the EAD, which has attracted the interest of few researchers since the implementation of the new Basel capital accord. The other contributions of this paper come<br />

from the quality of the database, which was provided by a large Canadian financial institution, and from the fact that the EAD is tested on individual lines of credit, as opposed to the corporate loans considered in past empirical studies.<br />

An econometric approach based on the stepwise application has allowed us to determine that age, the exposure limit, the drawn amount, the interest rate, the utilization ratio a month before default, the 3-month Canadian T-Bill and the S&P/TSE300 index all have explanatory power for the EAD. These results are in line with those listed in the literature review, meaning that there is little difference between the EAD of corporate and individual loans.<br />

Future research could try to formalize the dependence suspected between the EAD and the two other parameters of the expected loss (PD and LGD). Also, given that there are few empirical studies on the EAD, future research could improve the literature by developing new methodologies, adding different kinds of credit lines, and so on. Finally, we hope that this paper, thanks to the significant results obtained, will help enrich the literature on the EAD. Improving the understanding of measures like the EAD will help financial institutions improve their solvency, which contributes to a better financial system.<br />

6 References<br />

Allen, L. & Saunders, A. (2003). A survey of cyclical effects in credit risk measurement models, BIS Working<br />

Papers, 126, Monetary and Economic Department, January.<br />

Araten, M. & Jacobs, M. Jr. (2001). Loan equivalents for revolving credits and advised lines, The RMA Journal,<br />

May, 34-39.<br />

Asarnow, E. & Marker, J. (1995). Historical Performance of the US Corporate Loan Market 1988-1993, Journal of<br />

Commercial Lending, 10(2), Spring, 13-32.<br />

Basel Committee on Banking Supervision (2004). International Convergence of Capital Measurement and Capital<br />

Standards: A Revised Framework, Bank for International Settlements, June.<br />

Gruber, W. & Parchert, R. (2006). Overview of EAD Estimation Concepts. In B. Engelmann & R. Rauhmeier<br />

(Eds.), The Basel II Risk Parameters: Estimation, Validation and Stress Testing (pp. 177-196). Springer Berlin<br />

Heidelberg, Business and Economics.<br />

Jacobs, M. Jr. (2008). An empirical study of Exposure at Default, Washington, D.C., Office of the Comptroller of the Currency, OCC Working Paper, June.<br />

Jimenez, G. & Mencia, J. (2009). Modelling the distribution of credit losses with observable and latent factors,<br />

Journal of Empirical Finance, 16(2), March, 235-253.<br />

Jimenez, G., Lopez, J.A. & Saurina, J. (2006). What do one million credit line observations tell us about Exposure<br />

at Default? A study of credit line usage by Spanish firms, Banco De España, June.<br />

Kim, M.-J. (2008) Stress EAD: Experience of 2003 Korea Credit Card Distress, Journal of Economic Research, 13,<br />

73-102.<br />

Moral, G. (2006). EAD estimates for facilities with explicit limits. In B. Engelmann & R. Rauhmeier (Eds.), The<br />

Basel II Risk Parameters: Estimation, Validation and Stress Testing (pp. 197-242). Springer Berlin Heidelberg,<br />

Business and Economics.<br />

Ross, T., Minh, H. & Jarrad, H. (2007). Modeling exposure at default, credit conversion factors and the Basel II<br />

accord, Journal of Credit Risk, 3(2), Summer, 75-84.<br />

Stephanou, C. & Mendoza, J.C. (2005). Credit Risk Measurement Under Basel II: An Overview and Implementation<br />

Issues for Developing Countries, Working Paper 3556, World Bank Policy Research, April.<br />



A MACRO-BASED MODEL OF PD AND LGD IN STRESS TESTING LOAN LOSSES<br />

Esa Jokivuolle, Aalto School of Economics, Bank of Finland<br />

Email: esa.jokivuolle@bof.fi<br />

Matti Virén, University of Turku, Bank of Finland<br />

We present a macro variable-based empirical model for corporate bank loans’ credit risk, which captures the well-known positive<br />

relationship between probability of default (PD) and loss given default (LGD, i.e., the inverse of recovery) and their counter-cyclical<br />

movement with the business cycle. In the absence of proper micro data on LGD, we use a random-sampling method to estimate the<br />

annual average LGD. We specify a two equation model for PD and LGD which is estimated with Finnish time-series data from 1989-<br />

2008. We also use a system of time-series models for the exogenous macro variables to derive the main macroeconomic shocks which<br />

are then used in stress testing aggregate loan losses. We show that the endogenous LGD makes a considerable difference in stress tests<br />

compared to a constant LGD assumption.<br />

Keywords: stress testing, loan losses, default rate<br />

JEL classification: G28<br />

1. Introduction<br />

Credit risk portfolio models for bank loans may be divided into two categories: 1) models based on portfolio theory, and 2) empirical models based on macro variables and their interactions. Interest in the second group, the macro-based models, has been growing because regulators increasingly require stress tests in which loan loss scenarios are linked to a stressed state of the macro economy. In the macro-based models, default rates (which are then used as estimates of the probability of default (PD)) are modeled as functions of macro variables such as the output gap and interest rates. However, the second central element of credit risk, loss given default (LGD), is typically not modeled within the system of macro-based equations, although evidence from corporate bonds clearly suggests that PDs and LGDs are positively correlated, apparently via their common exposure to the business cycle.<br />

The absence of an “endogenous” LGD in the macro-based models is understandable because of the scarcity of<br />

time-series data on bank loan LGDs. As a result, LGD is often taken as a fixed parameter (see e.g. Sorge and<br />

Virolainen 2006) or its cyclical variation is modeled in a somewhat ad hoc manner. For prudential or stress testing<br />

purposes, the fixed LGD parameter is supposed to be chosen in a conservative manner to represent a “downturn”<br />

LGD. Clearly, if LGD data is available, more accurate estimates of the risk related to LGD and its joint dynamics<br />

with PDs can be provided (see Miu and Ozdemir 2010). First, the positively correlated PD and LGD can produce a<br />

much larger combined effect on credit risk in the case of severe recessions. Second, a non-constant LGD is<br />

apparently needed to explain the observation that, even though there are quite a few business failures in “normal times”, loan losses remain small (see Figure 1, which illustrates the Finnish case). This may be partly explained by the fact that new companies, which almost by definition are small, dominate the number of failures in good times. However, the most obvious explanation is the counter-cyclical behavior of the LGD.<br />

In this paper we present a model for the aggregate credit risk of banks’ corporate loans, in which the loans’<br />

(average) probability of default and the (average) loss given default are jointly determined by a group of macro<br />

variables and their joint dynamics. We extend the previous macro-based models of bank loans’ credit risk in two<br />

ways. First, we use a recently suggested estimation method to obtain a time-series for the average LGD across<br />

defaulted corporate loans (see Jokivuolle, Virén and Vähämaa 2010). Second, we incorporate the well-known<br />

common exposure of PD and LGD to the business cycle.<br />

As regards the first extension, the idea in the LGD estimation method suggested by Jokivuolle et al. (2010) is<br />

the following. In the absence of data on actual LGDs, the method provides a way to back out an estimate of the LGD<br />

from auxiliary data which reflect the unobservable LGD. It is first assumed that the LGD is a constant in the cross<br />

section of defaulted companies but can vary from one period to another. The method is based on the simple identity<br />

that in a given period losses from banks’ corporate loans are equal to the outstanding loans of the defaulted<br />



companies in that period times the (constant) LGD. We have data on aggregate bank loan losses, the total number of<br />

companies and defaulted companies, and the distribution of debt among all companies. However, as we do not know<br />

which companies actually defaulted in a given period (year), the method uses random sampling to form the<br />

distribution of possible LGDs which satisfy the above mentioned identity. The mean of this distribution (or in<br />

principle any other central statistic, such as the median) can then be used as the final LGD estimate. Overall, we<br />

believe that the method of Jokivuolle et al. (2010) may be helpful in estimating the PD-LGD correlation in cases<br />

where no actual data on LGDs are available.<br />

As to the second extension, we refine the preliminary approach taken in Jokivuolle et al. (2010) to incorporate<br />

the endogenous LGD in a macro-based model of PD which in turn is based on the model of Sorge and Virolainen<br />

(2006). We first find that the output gap, although it works well as a factor explaining the PD, does not sufficiently explain the time variation in our estimated LGD series. Hence, we replace it with the aggregate “gross profit rate”, which more<br />

directly reflects the financial state of the corporate sector. The gross profit rate explains about 60% of the annual<br />

time-series variation in LGD and helps establish in our PD-LGD model a relatively high positive correlation<br />

between the PD and the LGD.<br />

Following Sorge and Virolainen (2006), and using data from Finland, we then simulate the distribution of loan<br />

losses with a two-step procedure. In the first step random shocks from the estimated model are simulated with the<br />

Monte Carlo method in order to produce scenario paths for PD and LGD over a three-year horizon. In the second<br />

step the PD and LGD scenarios are used to simulate cumulative aggregate loan losses for the three-year horizon for<br />

a representative portfolio of corporate loans. We show that the extended model with the “endogenous” LGD<br />

produces considerably higher loss risk estimates than a benchmark model with a constant LGD. Following Miu and<br />

Ozdemir (2010), a constant downturn LGD would have to be 15-33% higher than the unconditional expected LGD in<br />

order to match the unexpected loss estimates obtained from the endogenous LGD model.<br />
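The two-step simulation can be sketched as follows. All numbers here (initial PD and LGD, shock volatility, the PD-LGD shock correlation) are illustrative placeholders, not the paper's estimated parameters; the logit indices mirror the logistic transform the paper uses to keep PD and LGD inside (0, 1).<br />

```python
import numpy as np

def simulate_losses(exposure, pd0, lgd0, n_paths=20000, horizon=3,
                    shock_sd=0.4, rho=0.8, seed=2):
    """Two-step sketch: (1) draw correlated shocks that move the PD and
    LGD logit indices together; (2) cumulate exposure * PD * LGD over
    the horizon. The logistic transform keeps PD and LGD inside (0, 1)."""
    rng = np.random.default_rng(seed)
    y_pd0 = np.log((1 - pd0) / pd0)    # logit indices of the starting levels
    y_lgd0 = np.log((1 - lgd0) / lgd0)
    losses = np.zeros(n_paths)
    for _ in range(horizon):
        z = rng.normal(size=n_paths)
        eps_pd = shock_sd * z
        eps_lgd = shock_sd * (rho * z + np.sqrt(1 - rho**2) * rng.normal(size=n_paths))
        pd = 1 / (1 + np.exp(y_pd0 - eps_pd))     # adverse shock raises PD...
        lgd = 1 / (1 + np.exp(y_lgd0 - eps_lgd))  # ...and LGD at the same time
        losses += exposure * pd * lgd
    return losses

losses = simulate_losses(exposure=100.0, pd0=0.02, lgd0=0.39)
print(np.mean(losses), np.percentile(losses, 99))  # expected loss vs. tail loss
```

Setting `rho=0` in this toy recovers a "constant-cycle" LGD and shrinks the tail, which is the qualitative point of the comparison above.<br />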

The paper is organized as follows. In section 2 we discuss the LGD estimation method introduced in Jokivuolle<br />

et al. (2010) and estimate a time-series for the average LGD in Finland over 1988-2008 which covers two very<br />

pronounced recessions. In section 3 the joint model of PD and LGD, along with estimation results is presented,<br />

using the Finnish data. Section 4 presents simulation results on the loan loss distribution, comparing the joint PD-<br />

LGD model with a benchmark model with a constant LGD. Section 5 concludes.<br />

2. Estimation of the average LGD<br />

In this section we recap and extend the description of the bank loan LGD estimation procedure, suggested in<br />

Jokivuolle et al. (2010). Their method starts by assuming that the LGD is the same for all defaulted corporate loans<br />

in any given year (or any period, depending on data availability), but LGD can vary from year to year. The data<br />

needed to estimate the LGD for a given year include banks' aggregate loan losses from corporate loans, the number<br />

of companies in the beginning of the year and the number of defaulted companies during the year, and the loan size<br />

distribution across companies.<br />

The estimation method proceeds as follows. Consider a population of firms i, i = 1,..., Nt, where t is the time index. Let li,t denote the amount of bank loans that firm i holds in year t, and recall that LGDt is assumed constant across firms in a given year. We indicate default by Di,t, which takes the value 1 if firm i defaults and 0 otherwise. We can now write total loan losses in year t (TLLt) as follows:<br />

TLLt = Σ_{i=1..Nt} Di,t li,t LGDt = LGDt Σ_{i=1..Nt} Di,t li,t (1)<br />

In other words, total loan losses in year t simply equal the sum of loans of defaulted firms, multiplied by the common LGD.<br />

As in the case of the data available to us from Finland, we do not know which firms defaulted in a given year, but we do know the total number of defaulted firms each year, denoted kt. We also know the total loan losses from banks’ corporate clients (TLLt), and we have information on the bank loan size distribution across firms, so that in the estimation procedure the li,t are taken as given.<br />


As one can observe or measure everything in equation (1) other than the LGD, the LGD can be estimated on the basis of equation (1). This is done by drawing random samples of size kt out of the annual population of Nt firms. For each random sample of kt defaulted firms with loans li,t, LGDt is solved from equation (1). Denote this by LGD*t. In addition, the method imposes a feasibility criterion: LGD*t has to lie between 0 and 100%. After a sufficient number of random samples (we used 100 000) meeting the feasibility criterion, the LGD*t values are organized into a frequency distribution. The expected value of this distribution is then used as the final estimate of the annual LGDt. To better understand the intuition of the estimation method, note that if one actually knew which firms defaulted in a given year, the procedure would simply reduce to using equation (1) to solve directly for LGDt. Alternatively, if all loans were of equal size (or could be approximated as such), LGDt could also be directly<br />

solved from (1). This follows from the fact that, if li,t = lt for each i, then the term Σ_{i=1..Nt} Di,t li,t would simply reduce to lt Σ_{i=1..Nt} Di,t = lt kt, regardless of which companies actually defaulted, because all companies had the same amount of loans.<br />
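The random-sampling procedure can be sketched as follows; the toy loan population and loss figure below are invented for illustration, and the draw count is reduced from the paper's 100 000:<br />

```python
import numpy as np

def estimate_lgd(loans, k, total_loss, n_samples=10000, seed=0):
    """Random-sampling LGD estimator based on equation (1): draw k
    'defaulted' firms, solve LGD = total_loss / sum(their loans), keep
    feasible draws (0 < LGD <= 1), and average them."""
    rng = np.random.default_rng(seed)
    loans = np.asarray(loans, dtype=float)
    feasible = []
    for _ in range(n_samples):
        sample = rng.choice(loans, size=k, replace=False)
        lgd = total_loss / sample.sum()
        if 0.0 < lgd <= 1.0:
            feasible.append(lgd)
    return float(np.mean(feasible))

# Toy population of five firms' loans and an aggregate loss figure:
loans = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
print(estimate_lgd(loans, k=2, total_loss=180.0))  # averages near 0.33
```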

Figure 2 provides the entire time series of estimated LGDt values over the sample period 1988-2008. In 1991, at the beginning of the Finnish 1990s crisis, the LGD reached its peak of 73% in our estimations. The average LGDt over time<br />

in our sample is about 39% which is broadly in line with earlier literature (see e.g. Schuermann, 2004) as well as<br />

with the assumption of 45%, central in the Basel capital adequacy framework.<br />

Schuermann (2004) surveys factors which are systematically related to LGD. He finds the business cycle to be<br />

of primary importance. As seen from Figure 2, our estimated LGD series for Finland also appears to be strongly<br />

related to the business cycle, as measured by the output gap. In the following section we turn to modeling this relationship. One could also try to extend the LGD estimation procedure to industry effects by allowing a separate LGD for each industry. However, this would turn the estimate into a vector of LGDs rather than a single LGD and would hence add complexity, along with additional data requirements, and might substantially increase the computational burden of the procedure.<br />

3. Modeling PD and LGD<br />

We define PD as the aggregate corporate default rate and model it by output gap, real interest rates and indebtedness.<br />

While the output gap reflects firms’ general demand conditions and hence drives the cyclical behavior of the default<br />

rate, interest rates and indebtedness affect the sensitivity of firms’ financial health to demand shocks.<br />

Turning to LGD, Figure 2 shows that the annual average LGD, which we have estimated in section 2, varies<br />

much over time and that it has a strong positive relationship with the PD. A natural way to take into account the<br />

cyclical features of LGD would seem to be to model LGD, just like the PD, as a function of the output gap.<br />

However, such an LGD equation fits the Finnish data quite poorly; the R 2 would be only about 20% (see Jokivuolle<br />

et al. 2010). Therefore, we abstain from using the output gap as the main determinant of LGD. Instead we use the<br />

aggregate gross profit rate as the explanatory variable of LGD. This choice can be motivated by the following<br />

arguments. First, the gross profit rate reflects cyclical changes in the economy. Second, it reflects the financial<br />

position of the firms in that a higher gross profit rate allows firms to finance their projects with lower leverage (e.g.,

using retained earnings instead) and hence the debt burden is likely to be smaller. The recovery rate from less<br />

indebted firms tends to be higher. Third, a more profitable corporate sector, reflected by a higher gross profit rate,<br />

can better absorb forced asset sales by defaulted companies (cf. the argumentation above based on Shleifer and

Vishny 1992). The model with the gross profit rate regressor turns out to fit the data quite well; as we will see below, the R² is typically above 60% in the sample periods.

Following Sorge and Virolainen (2006), PD is expressed in a logistic functional form to ensure that in the stress<br />

test simulations PD always lies between zero and one. The same restriction applies to LGD, so we apply the logistic<br />

transformation also to the LGD. Starting with the PD, we define<br />

PD = (1 + exp(yPD))^(-1),  (2)

where yPD is the macroeconomic index whose parameters must be estimated. A higher value for yPD implies a better<br />

state of the economy with a lower default probability PD, and vice versa. yPD is given by the logit transformation



L(PD) = ln((1 - PD)/PD) = yPD.  (3)

Analogous notation applies to LGD, so that we have the corresponding index yLGD.
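The pair of transformations in equations (2) and (3) is simply the logistic function and its inverse (the logit); a minimal sketch in Python (variable names are ours, not the paper's):

```python
from math import exp, log

def pd_from_index(y_pd: float) -> float:
    """Equation (2): PD = (1 + exp(y_PD))^-1.
    A higher index y_PD (better macro state) gives a lower PD."""
    return 1.0 / (1.0 + exp(y_pd))

def logit(pd: float) -> float:
    """Equation (3): L(PD) = ln((1 - PD)/PD) = y_PD."""
    return log((1.0 - pd) / pd)

# Round-trip check: the two transforms are inverses of each other.
y = 2.0
pd = pd_from_index(y)
assert abs(logit(pd) - y) < 1e-9
```

The same pair of functions serves for LGD, with yLGD in place of yPD.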

The macroeconomic indexes for PD and LGD are determined by the exogenous macroeconomic factors (already<br />

discussed above) in a linear model which in the time domain take the form:<br />

yj,t = Σi βj,i xi,t + υj,t,  (4)

in which j = PD, LGD, βj,i is the corresponding set of regression coefficients for explanatory variables xi,t, for i = 1,…,4, and υj,t is the error term. Equations (2)-(4) form a multifactor model of PD and LGD. The systematic risk components are captured by the macroeconomic variables xi,t and the idiosyncratic PD and LGD shocks are captured by the error terms υj,t.

As in Sorge and Virolainen (2006), individual macroeconomic variables are modelled as a system of univariate<br />

AR(2) processes:<br />

xi,t = αi,0 + αi,1 xi,t-1 + αi,2 xi,t-2 + εi,t.  (5)

The system of equations (2)-(5) governs the joint evolution of the PD, the LGD and the associated macroeconomic factors, with a 6 × 1 vector of error terms, E, and a 6 × 6 variance-covariance matrix of errors, Σ:

Et = (υPD,t, υLGD,t, ε1,t, ε2,t, ε3,t, ε4,t)′ ~ N(0, Σ).
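To make the structure of the system concrete, the following sketch simulates one macro factor and the two logistic indices with correlated residuals. All coefficients are hypothetical placeholders (the paper's estimated parameters are not reproduced here); only the residual correlation of 0.57 and the approximate PD and LGD levels are taken from the text.

```python
import math, random

random.seed(1)

# Hypothetical parameters (NOT the paper's estimates).
A0, A1, A2, SIG_X = 0.0, 0.9, -0.2, 0.01   # AR(2) for the output gap, eq (5)
B_PD, B_LGD = 40.0, 25.0                   # factor loadings in eq (4)
C_PD, C_LGD = 5.4, 0.45                    # intercepts, chosen so average
                                           # PD ~ 0.44% and LGD ~ 39%
RHO = 0.57                                 # residual correlation (from the text)
S_PD, S_LGD = 0.3, 0.2                     # residual standard deviations

def simulate(T=80):
    """Simulate the joint PD-LGD system of equations (2)-(5)."""
    x = [0.0, 0.0]
    pd_path, lgd_path = [], []
    for _ in range(2, T):
        # eq (5): AR(2) macro factor
        x.append(A0 + A1 * x[-1] + A2 * x[-2] + random.gauss(0.0, SIG_X))
        # correlated residuals: the channel for PD-LGD co-movement
        z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
        u_pd = S_PD * z1
        u_lgd = S_LGD * (RHO * z1 + math.sqrt(1.0 - RHO ** 2) * z2)
        # eq (4): linear indices (intercepts added here for realistic levels)
        y_pd = C_PD + B_PD * x[-1] + u_pd
        y_lgd = C_LGD + B_LGD * x[-1] + u_lgd
        # eq (2): logistic transformation keeps PD and LGD in (0, 1);
        # a higher index means a better state, hence lower PD and LGD
        pd_path.append(1.0 / (1.0 + math.exp(y_pd)))
        lgd_path.append(1.0 / (1.0 + math.exp(y_lgd)))
    return pd_path, lgd_path

pd_path, lgd_path = simulate()
```

Because both indices are decreasing transforms of the same factor and share positively correlated residuals, simulated PD and LGD co-move positively, as in the data.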

The PD-LGD system with AR(2) processes for the independent macro variables is estimated with Finnish<br />

quarterly data which cover the period 1989Q1-2008Q4. The results make sense both in terms of signs and<br />

magnitudes of coefficients and overall predictive power. Note that because of the logistic transformation, the signs<br />

of coefficients both in the PD and the LGD equation must be reversed to arrive at the correct economic<br />

interpretation. Hence, for example, after the positive sign of the output gap coefficient in the PD equation is<br />

switched to negative, we obtain the expected relationship that PD decreases as the output gap increases.<br />

The system was estimated with the SUR estimator to take into account the correlation between all residuals,<br />

both of the PD and LGD equation and the AR(2) equations. Residual correlation is the channel via which the model<br />

captures the cyclical co-movement of PD and LGD because in our model specification the PD and LGD equations<br />

do not share common observable factors. In the data, the correlation between the PD and LGD series is as high as<br />

ca. 90%. The PD-LGD correlation is captured via (1) correlation between residuals of the LGD and the PD equation<br />

and via (2) correlation between the gross profit rate (which is the sole exogenous variable in the LGD equation) and<br />

the exogenous variables of the PD equation (i.e., output gap, interest rate and indebtedness). Of primary interest is the

correlation between output gap and gross profit rate because these present the cyclical factors in the PD and LGD<br />

equation, respectively. Their correlation is ca. 24%, which is not very high. By contrast, the correlations between the gross profit rate and the two other exogenous variables of the PD equation, interest rate and indebtedness, are quite high: -0.87 in the case of the interest rate and -0.95 in the case of indebtedness. These correlations suggest that debt-related variables and debt capacity (reflected in the gross profit rate) are significant drivers of the PD-LGD co-movement.

Finally, the correlation between the residuals of the PD and the LGD equations is 0.57. This indicates<br />

that a large share of the PD-LGD correlation observed in the data cannot be attributed to the set of the observable<br />

exogenous variables present in the model. PD and LGD simulations with the model in section 4 were able to<br />

generate a PD-LGD correlation of 0.44 which falls short of the ca. 0.90 PD-LGD correlation in the data.<br />

Nonetheless, we conclude that the model captures the PD-LGD cyclical co-movement, if not its full quantitative<br />

scale, and is hence useful in considering its implications for credit risk. These implications are the subject of the next<br />

section.<br />

4. Simulation results<br />

Given the estimated model for PD and LGD and following Sorge and Virolainen (2006) we simulate the banking<br />

sector loan losses from corporate loans over a three-year horizon. This is done with Monte Carlo simulations in which we first generate a set of, say, 10 000 random scenario paths for PD and LGD. We use a sample

of 3000 Finnish firms and their bank loans, which is available to us only for year 2002. In doing so, we assign the<br />



aggregate PD as the probability of default for each company in the sample portfolio. Binomial random draws<br />

according to the common PD are used to determine defaulted firms in each simulation round. Given the simulated<br />

value of LGD for each period, which is similarly assumed to be the same for all defaulted companies, we obtain the<br />

loan loss for each individual defaulted company. Individual losses are then aggregated to portfolio level loan losses.<br />

From the resulting 10 000 aggregate loan loss scenarios we can then compute the expected value of loan losses plus<br />

various distributional measures like 99% and 99.9% value-at-risk figures of the loan loss distribution (see Figure 3<br />

for an example of our simulated loan loss distributions). The “raw” value-at-risk figures are further transformed into<br />

“unexpected loss” figures by subtracting from them the expected loss.<br />
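The simulation loop just described can be sketched as follows; it is scaled down from the paper's 3000 firms and 10 000 scenarios to keep the example fast, and the exposures are hypothetical.

```python
import random

random.seed(2)
# Hypothetical portfolio: the paper uses 3000 firms and 10 000 scenario
# paths; this sketch uses 300 firms and 2000 scenarios.
exposures = [random.uniform(0.1, 5.0) for _ in range(300)]

def simulate_losses(pd, lgd, n_scen=2000):
    """A common PD for all firms, binomial default draws, a common LGD,
    and individual losses aggregated to portfolio level."""
    losses = []
    for _ in range(n_scen):
        defaulted = sum(e for e in exposures if random.random() < pd)
        losses.append(defaulted * lgd)
    return sorted(losses)

losses = simulate_losses(pd=0.0044, lgd=0.39)
expected_loss = sum(losses) / len(losses)
var_99 = losses[int(0.99 * len(losses)) - 1]   # 99% value-at-risk
unexpected_99 = var_99 - expected_loss         # "unexpected loss"
```

In the paper's endogenous-LGD case the pd and lgd arguments would vary per scenario path from the estimated system rather than being held fixed as here.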

Our key exercise is to compare the unexpected loan losses between the case of (1) endogenous LGD, based on our PD-LGD model, and the case of (2) “exogenous” LGD, where the LGD equation is “shut down” and the LGD used in simulating loan losses is fixed at a constant equal to the expected LGD obtained from the PD-LGD model.

Table 1 shows that the 99% unexpected loss increases by 19% and the 99.9% unexpected loss by 30% when the<br />

endogenous LGD is used instead of the exogenously given constant LGD. Figure 3 illustrates the higher skewness of

the loan loss distribution based on the endogenous LGD model. These results clearly show that taking into account<br />

the systematic “risk” in the LGD by endogenizing it makes a sizable difference in credit portfolio risk estimates.

Miu and Ozdemir (2010) consider the following interesting question: how large should a constant “downturn” LGD estimate be to compensate for the fact that LGD varies over time and that PD and LGD are correlated? We

can use our PD-LGD model to ask the same question. How big should the constant LGD parameter be so that the<br />

exogenous LGD model would produce the same unexpected loss (either at 99% or 99.9% level) as the endogenous<br />

LGD model? We find that approximately a 45% constant LGD (instead of the 39% expected LGD used in Table 1<br />

baseline case) is required to match the 99% unexpected loss level of the endogenous LGD model. Similarly, a 52%<br />

constant LGD is required to match the 99.9% unexpected loss of the endogenous LGD model. Adjustments of this<br />

size are somewhat lower than in the model by Miu and Ozdemir (2010).<br />
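A small observation simplifies this matching exercise: with a constant LGD the portfolio loss is that constant times the defaulted exposure, so both value-at-risk and expected loss, and hence unexpected loss, are linear in the LGD, and the matching constant can be read off a single unit-LGD simulation. This shortcut is our own, not the paper's procedure, and the numbers below are purely illustrative.

```python
# Because losses under a constant LGD scale linearly in that constant,
# the "downturn" LGD matching a target unexpected loss (UL) is a ratio:
# simulate defaults once with LGD = 1 and rescale.
def matching_constant_lgd(ul_endogenous: float, ul_with_unit_lgd: float) -> float:
    """Constant LGD such that LGD * UL(LGD=1) equals the endogenous-model UL."""
    return ul_endogenous / ul_with_unit_lgd

# Illustration: if the endogenous model gives a 99% UL of 0.045 of the loan
# stock and the unit-LGD simulation gives 0.100, a 45% constant LGD matches.
assert abs(matching_constant_lgd(0.045, 0.100) - 0.45) < 1e-9
```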

So far we have used the model for simulating the loan loss distribution by using sample averages as starting<br />

values for the exogenous macro variables. As in Sorge and Virolainen (2006), the framework also offers the<br />

possibility to condition simulations on different macro shock scenarios. For example, one may consider a scenario in

which everything goes “wrong” as it often does before large financial disasters. Typically, output falls, indebtedness<br />

increases (perhaps as a consequence of a collapse of the exchange rate regime) and interest rates increase as a result<br />

of loss of policy credibility (or as a result of defending the exchange rate regime). An example of this kind of rather extreme event is the Finnish depression in the early 1990s (see Conesa et al. 2007). However, the recent global

financial crisis of 2008-2010 showed up in Finland quite differently from the 1991-1993 depression. Although<br />

output fell by almost the same amount, this time there were no major debt or interest rate shocks. The gross profit<br />

rate fell again but now its initial level was considerably higher. As a result, loan losses remained rather limited,<br />

given the drop in economic activity (see Figure 1).<br />

In Table 1, we consider the effects of different shocks by introducing a permanent one standard deviation shock<br />

to the exogenous macro variables for the three-year horizon. Conditional on such initial shocks, the stochastic<br />

simulation is carried out as above. We focus on the conditional expected loss, given the chosen initial shocks. The<br />

shocks are introduced either one by one or all simultaneously (see row “All shocks” in Table 1). In addition, we<br />

include a simultaneous shock scenario which roughly captures the macro state that prevailed in Finland in 1991-1993. The expected loan losses as a share of the aggregate loan stock, conditional on such a macro scenario, are 6.5% cumulatively over the three-year period. This model-based scenario falls short of reality, i.e. the roughly 10% loan loss rate in Finland over 1991-1993. However, the fit is much closer than what we would obtain with the constant,

exogenous LGD model. An even closer fit is obtained by introducing a very strong “sudden stop” shock to the

system. If a one-period shock of the same magnitude as the sum of the 12 periodical shocks is introduced in the first period, loan losses exceed the above-mentioned 10% level (11.5%). This reflects the persistence of shocks in

the system: a sudden stop shock has a larger impact than a series of smaller shocks that amount to the same total<br />

magnitude.<br />

In addition, the model also allows us to consider idiosyncratic shocks to LGD. As already discussed in section

3.1, these could result from, say, a sudden change in asset prices (not explicitly modeled in the current framework)<br />

due to a change in expectations. The results of such a shock are also provided in Table 1; again we consider a shock<br />

that corresponds to one standard deviation of the error term of the LGD equation. Such an idiosyncratic LGD shock<br />



produces a similar set of responses as the different macro shocks in Table 1. It is interesting to ask what the relative contributions of the different types of shocks are: how much can be attributed to shocks in exogenous variables

and how much to shocks in the respective variable itself. To further investigate this question we estimated a simple<br />

VAR model for all variables. The variance decomposition of the VAR showed that the share of idiosyncratic LGD<br />

shocks is about 52% while the corresponding number for the PD would be “only” 36%. This additional analysis<br />

suggests that we should not only focus on the exogenous variables as potential sources of banks’ loan losses but<br />

should also be aware of the potential influence of institutional, structural and expectations-related shocks, the<br />

potential magnitude of which in the current framework is captured by the idiosyncratic LGD error term.<br />

5. Conclusions<br />

In the light of empirical data from Finland as well as our simulations of unexpected bank loan losses, it is quite obvious that an assumption of a constant (exogenous) LGD is not correct. Simulations show that credit risks

assessed with the help of the endogenous LGD model are considerably higher than those obtained from the<br />

benchmark exogenous LGD version of the model. Another way to illustrate this is that a constant “downturn” LGD<br />

would have to be 15 to 33% higher than the time-series average LGD in order to match the unexpected loss levels<br />

obtained with the endogenous LGD model (cf. Miu and Ozdemir 2010).<br />

The aim of endogenizing the LGD has incurred a certain cost in that we have had to abandon the industry-specific PDs of the original Sorge and Virolainen (2006) model and replace them with the aggregate PD of the

corporate sector. A natural extension would be to try to bring back the industry-specific structure, should additional<br />

data requirements be met in order to obtain industry-specific LGDs. This could perhaps be achieved with a<br />

somewhat condensed industry categorization structure. The introduction of asset prices into the operational structure of the model could also be considered.

6. Acknowledgements<br />

We would like to thank Kimmo Virolainen who kindly provided the original computer code, Mauri Kotamäki for<br />

excellent research assistance, and Anni-Mari Karvinen and Liisa Väisänen for valuable help with the LGD<br />

estimation. The usual disclaimer applies.<br />

7. References<br />

Basel Committee on Banking Supervision (2006) Basel II: International Convergence of Capital Measurement and<br />

Capital Standards: A Revised Framework - Comprehensive Version, June 2006.<br />

Conesa, J., Kehoe, T. and Ruhl, K. (2007) Modeling great depressions: the depression in Finland in the 1990s. In T. Kehoe and E. Prescott (eds.) Great Depressions of the Twentieth Century, Federal Reserve Bank of Minneapolis.

Jokivuolle, E., Virén, M. and Vähämaa, O. (2010) Transmission of macro shocks to loan losses in a deep crisis: the case of Finland. In Rösch, D. and Scheule, H. (eds.) Model Risk in Financial Crises - Challenges and Solutions for Financial Risk Models, Risk Books, London.

Miu, P. and Ozdemir, B. (2010) Basel requirement of downturn LGD: modeling and estimating PD & LGD<br />

correlations. Journal of Credit Risk, forthcoming. Available at SSRN: http://ssrn.com/abstract=907047<br />

Shleifer, A. and Vishny, R. (1992) Liquidation values and debt capacity: a market equilibrium approach. Journal of Finance 47(4), 1343-1366.

Sorge, M., Virolainen, K. (2006) A comparative analysis of macro stress-testing methodologies with application to<br />

Finland. Journal of Financial Stability 2(2), 113–151.<br />



Simulation                      PD      LGD   Expected loss rate
Base: endogenous LGD            0.0044  0.39  0.021
GDP shock                       0.0057  0.39  0.027
Interest rate shock             0.0058  0.39  0.027
Debt shock                      0.0052  0.39  0.025
Gross profit rate shock         0.0044  0.53  0.028
LGD shock                       0.0044  0.45  0.024
All shocks (excl. LGD shock)    0.0085  0.52  0.054
Finnish crisis 1991-93          0.0087  0.60  0.065
Base: exogenous LGD             0.0044  0.39  0.020
All shocks (excl. LGD shock)    0.0086  0.39  0.039

In all cases, a shock is defined as one standard deviation of the error term in the respective AR(2) models, or of the error term of the LGD equation in the case of the LGD shock.

Table 1 Expected PD and LGD and expected loss in the conditional and unconditional simulations

[Figure omitted: left axis 0-5; series: bankruptcy rate p.a. and loan losses in 2009 prices; years 1986-2008.]

Figure 1 Aggregate corporate default rate (PD) and loan loss rate in Finland

[Figure omitted: output gap (left scale, -.06 to .08) and LGD series, 1988-2008.]

Figure 2 Estimated quarterly average LGD (right scale) and output gap (left scale) in Finland

[Figure omitted: histogram of loss in % of total credit exposure (x axis 0-9) against frequency in % (y axis 0-2.5); the figure illustrates the unconditional simulation with endogenous LGD.]

Figure 3 Example of the simulated loan loss distributions


XTREME CREDIT RISK MODELS: IMPLICATIONS FOR BANK CAPITAL BUFFERS<br />

David E. Allen, Edith Cowan University<br />

Akhmad R. Kramadibrata, Edith Cowan University<br />

Robert J. Powell, Edith Cowan University<br />

Abhay K. Singh, Edith Cowan University<br />

Abstract. The Global Financial Crisis (GFC) highlighted the importance of measuring and understanding extreme credit risk. This<br />

paper applies Conditional Value at Risk (CVaR) techniques, traditionally used in the insurance industry to measure risk beyond a<br />

predetermined threshold, to four credit models. For each of the models we use both Historical and Monte Carlo Simulation<br />

methodology to create CVaR measurements. The four extreme models are derived from modifications to the Merton structural model<br />

(which we term Xtreme-S), the CreditMetrics Transition model (Xtreme-T), Quantile regression (Xtreme-Q), and the authors’ own

unique iTransition model (Xtreme-i) which incorporates industry factors into transition matrices. For all models, CVaR is found to be<br />

significantly higher than VaR, and there are also found to be significant differences between the models in terms of correlation with<br />

actual bank losses and CDS spreads. The paper also shows how extreme measures can be used by banks to determine capital buffer<br />

requirements.<br />

Keywords: credit risk, conditional value at risk, conditional probability of default, historical simulation, Monte Carlo simulation.<br />

JEL Classification: G01, G21, G28<br />

Acknowledgements: Allen and Powell thank the Australian Research Council for funding support.<br />

1. Introduction<br />

The Global Financial Crisis (GFC) has raised widespread concern about the ability of banks to accurately

measure and provide for credit risk during extreme downturns. Prevailing widely used credit models were generally<br />

designed to predict credit risk on the basis of ‘average’ credit risks, or in the case of Value at Risk (VaR) models on<br />

the basis of risks falling below a pre-determined threshold at a selected level of confidence, such as 95 percent or 99<br />

percent. The problem with these models is that they are not designed to measure the most extreme losses. It is<br />

precisely during these extreme circumstances when firms are most likely to fail, and it is exactly these situations that<br />

the models in this study are designed to capture. Although the use of VaR is widespread, particularly since its adoption as a primary market risk measure in the Basel Accords, it is not without criticism. Critics include

Standard and Poor’s analysts (Samanta, Azarchs, and Hill, 2005) due to inconsistency of VaR application across<br />

institutions and lack of tail risk assessment. VaR has also been criticised by Artzner, Delbaen, Eber and Heath<br />

(1999; 1997) as it does not satisfy mathematical properties such as subadditivity. Conditional Value at Risk (CVaR)<br />

on the other hand, measures extreme returns (those beyond VaR). The metric has been shown by Pflug (2000) to be<br />

a coherent risk measure without the undesirable properties exhibited by VaR. CVaR has been applied to portfolio<br />

optimization problems by Uryasev and Rockafellar (2000), Rockafellar and Uryasev (2002), Andersson et al. (2000),

Alexander et al (2003), Alexander and Baptista (2003), Rockafellar et al (2006), Birbil, Frenk, Kaynar, & Noyan<br />

(2009) and Menoncin (2009). CVaR has also been explored as a measure of sectoral market and credit risk by Allen<br />

and Powell (2009a, 2009b), but compared to VaR, CVaR studies in a credit context are still in their infancy.<br />

Given the importance of understanding and measuring extreme credit risk, the aims of this study are to demonstrate how CVaR techniques can be applied to prevailing models to measure tail risk, to investigate to what extent these CVaR measures differ significantly from VaR measures, to show how asset value fluctuations can impact on bank capital buffer requirements, and to ascertain which of the models correlate most highly with actual measures of credit risk (including Credit Default Swap spreads, delinquent loans and charge-offs).

To ensure a thorough examination of CVaR metrics we use a range of models (four in total as explained in<br />

Section 2), as well as apply two techniques (Historical and Monte Carlo Simulation) to each model.<br />



2. Data and Methodology<br />

2.1. Data<br />

Data is divided into Pre-GFC (January 2000 to December 2006, this 7 year timeframe aligns with Basel Accord<br />

advanced model credit risk requirements) and GFC (2007 to 2009). We also generate an annual measure for each<br />

model for each of the 10 years in the dataset. For our Merton / KMV based models (Xtreme-S and Xtreme-Q)

which require equity prices, we obtain daily prices from Datastream, together with required balance sheet data,<br />

including asset and debt values. To ensure a mix of investment and speculative entities we use entities from the S&P<br />

500 as well as entities from Moody’s Speculative Grade Liquidity Ratings list (Moody's Investor Services, 2010a).<br />

In both cases we only include rated entities, for which equity prices and Worldscope balance sheet data are available<br />

in Datastream. Entities with less than 12 months of data in either of the two periods are excluded. This results in 378

entities consisting of 208 S&P 500 companies and 170 speculative companies. Credit ratings required for the<br />

transition based models are obtained from Moody’s (Moody's Investor Services, 2010b). We use Standard and<br />

Poor’s (2009) US transition probability matrices for each year in our study. For the Pre-GFC vs GFC periods, we<br />

average the matrices for the relevant years in the dataset. Annual delinquent loans and charge-off rates were<br />

obtained from the U.S. Federal Reserve Bank (2010). Annual CDS figures for US Corporates were obtained from<br />

Datastream. These CDS figures were extracted by credit rating, and weighted according to the dollar value of debt<br />

for each credit rating category in our data sample.<br />

2.2. Methodology Model 1: Xtreme-S<br />

We use the Merton / KMV approach to estimating default, and then modify this calculation to incorporate a CVaR<br />

component (which we term CPD as the model uses probability of default as opposed to VaR). The Merton/KMV<br />

model is well documented (for example, Crosbie & Bohn, 2003). In summary the point of default is where the firm’s<br />

debt exceeds asset values. Distance to default (DD) and probability of default (PD) are measured as<br />

DD = [ln(V/F) + (µ - 0.5 σV²)T] / (σV √T)  (1)

PD = N(-DD)  (2)

where V = market value of the firm’s assets, F = face value of the firm’s debt (in line with KMV, this is defined as current liabilities plus one half of long term debt), σV = the annualised standard deviation of asset returns, µ = an estimate of the annual return (drift) of the firm’s assets, and N = the cumulative standard normal distribution function.
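Equations (1) and (2) can be sketched directly, assuming annualised inputs; the example firm's figures below are hypothetical.

```python
from math import log, sqrt
from statistics import NormalDist

def distance_to_default(V, F, mu, sigma_V, T=1.0):
    """Equation (1): DD = [ln(V/F) + (mu - 0.5*sigma_V**2)*T] / (sigma_V*sqrt(T))."""
    return (log(V / F) + (mu - 0.5 * sigma_V ** 2) * T) / (sigma_V * sqrt(T))

def probability_of_default(V, F, mu, sigma_V, T=1.0):
    """Equation (2): PD = N(-DD), with NormalDist.cdf playing the role of N."""
    return NormalDist().cdf(-distance_to_default(V, F, mu, sigma_V, T))

# Illustrative firm: assets 120, default point 100 (current liabilities plus
# half of long-term debt), 5% drift, 25% asset volatility, one-year horizon.
dd = distance_to_default(120, 100, 0.05, 0.25)
pd = probability_of_default(120, 100, 0.05, 0.25)
```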

We define conditional distance to default (CDD) as being DD on the condition that standard deviation of asset<br />

returns exceeds standard deviation at the 95 percent confidence level, i.e. the worst 5 percent of asset returns. We<br />

term the standard deviation of the worst 5 percent of returns as CStdev, which we substitute into equation 1 to obtain<br />

a conditional DD:<br />

CDD = [ln(V/F) + (µ - 0.5 σV²)T] / (CStdevV √T)  (3)

Our Monte Carlo approach generates 20,000 simulated asset returns for every company in our dataset. We<br />

generate 20,000 random numbers based on the standard deviation and mean obtained using the Historical approach.<br />

We then follow the same approach as for the Historical model, applying the standard deviation of all simulated<br />

returns to equation 1 to measure DD and the standard deviation of the worst 5 percent of simulated returns to<br />

equation 3 to measure CDD.<br />
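A sketch of the CStdev/CDD construction with simulated returns follows. The paper does not spell out whether CStdev is measured around the tail's own mean or the full-sample mean; the version below uses the full-sample mean (an assumption of ours), which makes the tail dispersion larger than the overall one and hence CDD smaller than DD, in line with the relative magnitudes reported in Table 1.

```python
import random
from math import log, sqrt
from statistics import mean, pstdev

random.seed(3)
# 20,000 simulated daily asset returns, as in the Monte Carlo variant;
# the mean and standard deviation here are hypothetical.
returns = [random.gauss(0.0003, 0.012) for _ in range(20_000)]

mu_full = mean(returns)
worst_5pct = sorted(returns)[: len(returns) // 20]      # worst 5 percent
# One reading of CStdev: dispersion of the worst 5 percent of returns,
# measured around the full-sample mean so tail returns count in full.
c_stdev_daily = pstdev(worst_5pct, mu=mu_full)
sigma_annual = pstdev(returns) * sqrt(250)              # annualised sigma_V
c_stdev_annual = c_stdev_daily * sqrt(250)              # annualised CStdev

def dd_with_sigma(V, F, mu, sigma_num, sigma_den, T=1.0):
    """Equations (1)/(3): CDD substitutes CStdev for sigma_V in the denominator."""
    return (log(V / F) + (mu - 0.5 * sigma_num ** 2) * T) / (sigma_den * sqrt(T))

dd = dd_with_sigma(120, 100, 0.05, sigma_annual, sigma_annual)
cdd = dd_with_sigma(120, 100, 0.05, sigma_annual, c_stdev_annual)
```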

2.3. Methodology Model 2: Xtreme-Q<br />

Quantile regression per Koenker & Bassett (1978) and Koenker and Hallock (2001) is a technique for dividing a dataset into parts. Minimising the sum of symmetrically weighted absolute residuals yields the median where 50



percent of observations fall either side. Similarly, other quantile functions are yielded by minimising the sum of asymmetrically weighted absolute residuals, where the weights are functions of the quantile in question:

min_ξ Σi ρτ(yi - ξ),

where ρτ(.) is the tilted absolute value function, providing the τth sample quantile as its solution. This makes quantile regression robust to the presence of outliers.

We measure Betas for fluctuating assets (as measured by the Merton Model) across time and across quantiles,<br />

and the corresponding impact of these quantile measurements on DD. Our y axis depicts the asset returns for the<br />

quantile being measured (we measure the 50 percent quantile which corresponds roughly to the standard Merton<br />

model, and the 95 percent quantile to give us our CStdev). The x axis represents the returns for all the asset returns<br />

(all quantiles) in the dataset. The Historical approach is based on the actual historical asset fluctuations. The Monte<br />

Carlo approach uses 20,000 simulated asset returns generated in the same manner as for Xtreme-S.<br />
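The check-function characterisation can be demonstrated in a few lines: minimising the asymmetrically weighted absolute residuals over candidate values recovers the τth sample quantile (a minimiser always lies at a sample point), and an outlier leaves the median essentially untouched.

```python
def rho(u, tau):
    """Check (pinball) function: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    return tau * u if u >= 0 else (tau - 1) * u

def sample_quantile(y, tau):
    """Brute-force minimisation of sum(rho_tau(y_i - xi)) over sample points."""
    return min(y, key=lambda xi: sum(rho(yi - xi, tau) for yi in y))

y = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]       # one large outlier
assert sample_quantile(y, 0.5) in (5, 6)   # the median ignores the outlier
assert sample_quantile(y, 0.95) == 100     # the upper tail quantile does not
```

A full quantile regression additionally minimises this loss over regression coefficients rather than a single location parameter; the scalar case above is the essential idea.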

2.4. Methodology Model 3: Xtreme-T<br />

We base this on the CreditMetrics transition methodology (Gupton, Finger, & Bhatia, 1997). In summary, this model is

based upon obtaining the probability (ρ) of a bank customer transitioning from one grade to another as shown for the<br />

following BBB example:<br />

BBB ρAAA ρAA ρA ρBBB ρBB ρB ρCCC/C ρD<br />

External raters such as Moody’s and Standard & Poor’s (S&P) provide transition probabilities for each grading<br />

and we use the S&P US transition probabilities. Based on the probability table, Historical VaR is obtained by

calculating the probability weighted portfolio variance and standard deviation (σ), and then calculating Historical<br />

VaR using a normal distribution (for example 1.645σ for a 95 percent confidence level). We extend this VaR<br />

methodology (Gupton et al., 1997) to calculate Historical CVaR by using the lowest 5 percent of ratings for each<br />

industry.<br />

CreditMetrics (see also Allen & Powell, 2009b) use Monte Carlo modelling as an alternate approach to<br />

estimating VaR, and we follow this approach for our Monte Carlo CVaR. Transition probabilities and a normal<br />

distribution assumption are used to calculate asset thresholds (Z) for each rating category as follows:<br />

Pr(Default) = Φ(ZDef/σ),  (4)

Pr(CCC) = Φ(ZCCC/σ) - Φ(ZDef/σ),  (5)

and so on, where Φ denotes the cumulative normal distribution and

ZDef = σ Φ^-1(Pr(Default)).  (6)

Scenarios of asset returns are generated using a normal distribution assumption. These returns are mapped to<br />

ratings using the asset thresholds, with a return falling between thresholds corresponding to the rating above it. In<br />

line with this methodology we generate 20,000 returns for each firm from which portfolio distribution and VaR are<br />

calculated. We extend this methodology to calculate Monte Carlo CVaR by obtaining the worst 5 percent of the<br />

20,000 returns.<br />
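The threshold construction can be sketched as follows; the BBB transition row below is a hypothetical illustration, not S&P's published matrix. Cumulative transition probabilities are inverted through the normal CDF to get asset return thresholds, and a simulated return is mapped to the rating whose band it falls in.

```python
from statistics import NormalDist

nd = NormalDist()
sigma = 1.0
# Hypothetical one-year transition probabilities for a BBB obligor,
# ordered from default upwards (they sum to 1).
probs = {"D": 0.002, "CCC/C": 0.003, "B": 0.01, "BB": 0.05,
         "BBB": 0.85, "A": 0.06, "AA": 0.02, "AAA": 0.005}

# Thresholds per eq (6): Z_D = sigma*Phi^-1(p_D), Z_CCC = sigma*Phi^-1(p_D + p_CCC), ...
thresholds, cum = [], 0.0
for rating, p in list(probs.items())[:-1]:   # the top band needs no upper bound
    cum += p
    thresholds.append((rating, sigma * nd.inv_cdf(cum)))

def rating_for_return(r):
    """Map a simulated asset return to the rating band it falls in:
    below Z_D is default; between Z_D and Z_CCC is CCC/C; and so on."""
    for rating, z in thresholds:
        if r < z:
            return rating
    return "AAA"
```

Drawing many normal returns per firm and mapping each through rating_for_return reproduces the Monte Carlo step described above.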

2.5. Methodology Model 4: Xtreme-i<br />

CreditPortfolioView (Wilson, 1998) adjusts transition probabilities based on industry and country factors calculated<br />

from macroeconomic variables, recognising that customers of equal credit rating may transition differently<br />

depending on their industry risk. However, a study by APRA (1999) showed that banks did not favour using<br />

macroeconomic factors in their modelling due to complexities involved. Our own iTransition model (Allen &<br />

Powell, 2009b) uses the same framework as CreditPortfolioView but, on the basis that differences in industry risk will be captured in share prices, incorporates market VaR (fluctuations in the share prices of industries) instead of

macroeconomic variables to derive industry adjustments. This is done by calculating market VaR for each industry,<br />

then calculating the relationship between market VaR and credit risk for each industry, using the Merton model to<br />



calculate the credit risk component. We classify data into sectors using the Global Industry Classification Standard (GICS): Energy, Materials, Industrials, Consumer Discretionary, Consumer Staples, Financials, Health Care, Retail, Information Technologies, Telecommunications and Utilities. These factors are used to adjust the Xtreme-T model

as follows using a BBB rated loan example:<br />

BBB ρAAAi ρAAi ρAi ρBBBi ρBBi ρBi ρCCC/Ci ρDi<br />

Other than the industry adjustments, our Xtreme-i Historical and Monte Carlo VaR and CVaR calculations<br />

follow the same process as Xtreme-T.<br />

3. Results and implications for capital<br />

Historical<br />

Model Metric Pre GFC GFC<br />

Xtreme-S DD 8.64 4.07<br />

CDD 2.53 1.26<br />

Xtreme-Q DD 8.10 4.04<br />

CDD 2.31 1.96<br />

Xtreme-T VaR 0.0190 0.0453<br />

CVaR 0.0433 0.0908<br />

Xtreme-i VaR 0.0182 0.0570<br />

CVaR 0.0453 0.1052<br />

Monte Carlo<br />

Model Metric Pre GFC GFC<br />

Xtreme-S DD 8.63 4.06<br />

CDD 3.03 1.08<br />

Xtreme-Q DD 8.09 3.81<br />

CDD 2.81 1.84<br />

Xtreme-T VaR 0.0162 0.0507<br />

CVaR 0.0527 0.0865<br />

Xtreme-i VaR 0.0175 0.0543<br />

CVaR 0.0524 0.0859<br />

Table 1. Results Summary. DD (measured by number of standard deviations) is calculated using equation 1. CDD is based on the worst 5 percent<br />

of asset returns using equation 3. VaR (95 percent confidence level) and CVaR (average of losses beyond VaR) are daily figures and can be<br />

annualised by multiplying by the square root of 250, being the approximate number of annual trading days. The pre-GFC period is the 7 years<br />

from 2000 – 2006 whereas the GFC period is the 3 years from 2007 – 2009.<br />
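The quantities in Table 1 can be reproduced mechanically. The sketch below is illustrative only (the paper's asset-return series is not reproduced here); it computes Historical VaR and CVaR at the 95 percent level and annualises them as described in the caption:<br />

```python
import numpy as np

def historical_var_cvar(returns, confidence=0.95):
    """Daily Historical VaR at `confidence`, and CVaR as the
    average of the losses beyond VaR (both reported as losses)."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, confidence)      # 95th percentile loss
    cvar = losses[losses >= var].mean()        # mean of worst 5 percent
    return var, cvar

def annualise(daily_figure, trading_days=250):
    # Table 1 caption: multiply daily figures by sqrt(250)
    return daily_figure * np.sqrt(trading_days)
```

For a vector of daily asset returns `r`, `historical_var_cvar(r)` yields the daily pair, and `annualise` converts either figure to an annual one.<br />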

Table 1 shows large differences between VaR and CVaR, and between DD and CDD. For example, all Historical models<br />
show CDD approximately 3 times higher than CVaR during the pre-GFC period, increasing to approximately<br />
5 times higher for the transition based models (Xtreme-T and Xtreme-i) over the GFC period. The Monte Carlo<br />
models show similar trends to their corresponding Historical models, although Historical VaR and Monte Carlo VaR<br />
are slightly closer than Historical CVaR and Monte Carlo CVaR. VaR is closer because the large number of Historical<br />
VaR observations (95 percent of historical observations) is compared to the extremely large number of Monte Carlo<br />
VaR observations, whereas the Historical model generates only a small number of CVaR observations (5 percent of<br />
historical observations) compared to the large number of Monte Carlo CVaR observations (5 percent of 20,000<br />
observations). Although there are differences between the models in the extent of the variation between the quantiles,<br />
the difference between VaR and CVaR (or DD and CDD) is nonetheless significant for all models at the 99 percent<br />
level using F tests for changes in volatility. Implications for banks are<br />



that provisions and capital calculated on below threshold measurements will clearly not be adequate during periods<br />

of extreme downturn as illustrated in Figure 1.<br />

[Figure 1: the plotted data points are]<br />
Quantile asset value fluctuations:  DD 1.84 (β 3.26, σ 0.0258);  DD 2.81 (β 2.13, σ 0.0168);  DD 3.81 (β 1.57, σ 0.0124)<br />
All asset value fluctuations (10 years):  DD 5.98 (β 1.0, σ 0.00789);  DD 8.09 (β 0.74, σ 0.00584)<br />

Figure 1. Illustration of Fluctuating Risk. The figure shows the results of the Quantile Regression (Xtreme-Q) Model for the 50 percent and 95<br />
percent quantiles for pre-GFC and GFC periods. The pre-GFC period is the 7 years from 2000 – 2006 whereas the GFC period is the 3 years<br />
from 2007 – 2009. The y axis is calculated on the asset fluctuations (σ), using the Merton model, for the quantile in question. The x axis is the<br />
median σ for the entire 10 year period. Thus the Beta (β) for the 50 percent quantile for the 10 year period is one. Where σ for a particular<br />
quantile is less (greater) than the median for the 10 year period, β is less (greater) than one, and DD increases (reduces) accordingly.<br />

As asset value σ is the denominator of the DD equation (equation 1), as σ increases (reduces) from one level to<br />
another (i.e., from σ1 to σ2), DD reduces (increases) by the same proportion. Thus the numerator of the equation (a<br />
measure of capital – the distance between assets and liabilities) needs to increase to restore DD to the same<br />
level (i.e., as per the BOE observation in Section 1, capital (K) will need to increase by the same proportion to<br />
restore market confidence in the banks’ capital). Thus the required change in capital (K*) is:<br />

K* = K × σ2 / σ1 (7)<br />

Based on Figure 1, during the extreme fluctuations of the GFC (as measured by the 95 percent quantile) US<br />

banks needed 3 times more capital than during ‘median’ circumstances (as measured by the 10 year median). Whilst<br />

we have used the Xtreme-Q model to illustrate this, the same principle applies to all the models - a trebling of VaR<br />

or DD requires treble capital (100 percent buffer) to deal with it.<br />
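Equation 7's scaling is easy to apply; a minimal sketch (the σ values below are read off Figure 1, and the base capital of 100 is arbitrary):<br />

```python
def required_capital(capital, sigma_old, sigma_new):
    """Equation 7: K* = K * sigma2 / sigma1, i.e. capital scales in
    proportion to asset volatility to hold DD at its previous level."""
    return capital * sigma_new / sigma_old

# 10-year median sigma vs the GFC 95 percent quantile sigma (Figure 1)
k_star = required_capital(100.0, 0.00789, 0.0258)
# k_star is roughly 327, i.e. around 3.3 times the original capital
```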

When comparing different models, it is important to consider how well their relative outcomes compare to<br />

actual credit risk volatility. Using 10 years of annual data, we correlate our measures for the four models to three<br />

measures of actual credit risk. The first of these measures is Credit Default Swap (CDS) spreads, a measure of the<br />

premium the market is prepared to pay for increased credit risk. The second and third are Delinquent Loans and<br />

Charge-off rates as reported by the US Federal Reserve. These correlations are reported in Table 2. Various lags<br />

were tested, with most correlations being most significant with no lag and some correlations (the shaded areas of<br />

Table 2) being most significant with a 1 year lag (e.g., a 2009 measurement of actual risk compared to a 2008<br />
model measurement). To avoid over-reporting of figures, we show only the results of the Historical model (the<br />
Monte Carlo models produce very similar outcomes). The structural based models (Xtreme-S and Xtreme-Q) show a<br />
higher correlation with CDS spreads than the other models. This is because CDS spreads change daily with market<br />



conditions, as does the asset value component of the structural model. The transition based models (Xtreme-T and<br />

Xtreme-i) which depend on ratings (more sluggish than CDS spreads as ratings are often updated only annually)<br />

show no significant correlation in the same year, but a higher correlation when using a one year lag.<br />
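The lag search described above reduces to correlating each model metric with the actual-risk series at lag 0 and lag 1. A sketch (the series names are illustrative):<br />

```python
import numpy as np

def lagged_corr(model_metric, actual_risk, lag=0):
    """Pearson correlation of actual risk in year t with the model
    metric in year t - lag (lag=1 reproduces the shaded cells of
    Table 2, e.g. 2009 actual risk against the 2008 model figure)."""
    m = np.asarray(model_metric, dtype=float)
    a = np.asarray(actual_risk, dtype=float)
    if lag:
        m, a = m[:-lag], a[lag:]
    return np.corrcoef(m, a)[0, 1]
```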

Model      Metric   CDS Spreads   Delinquent Loans   Charge-off rates<br />
Xtreme-S   DD       0.915 **      0.786 **           0.606<br />
           CDD      0.906 **      0.826 **           0.647 *<br />
Xtreme-Q   DD       0.914 **      0.789 **           0.608<br />
           CDD      0.885 **      0.865 **           0.683 *<br />
Xtreme-T   VaR      0.573         0.929 **           0.936 **<br />
           CVaR     0.741 *       0.893 **           0.908 **<br />
Xtreme-i   VaR      0.831 **      0.952 **           0.926 **<br />
           CVaR     0.768 **      0.920 **           0.920 **<br />

Table 2. The table correlates the Historical model metrics produced by each of our four models for each of the ten years in our data sample with<br />

three measures of actual credit risk of US banks, being CDS Spreads, Delinquent Loans, and Charge-off rates. Level of significance is measured<br />

by a t-test, with * denoting 95 percent significance and ** denoting 99 percent significance. Non shaded areas are where highest correlation is<br />

experienced with no lag, and the shaded areas with a 1 year lag.<br />

All four models show highly significant (99 percent confidence) correlation with delinquent loans, meaning that<br />

the metrics of all the models are a good indicator of actual defaults. There is very high significance shown by the<br />

transition based models’ correlations with charge-off rates. The timeline in Figure 2 shows how both CDS spreads<br />
and the structural model respond quickly to market events, resulting in high correlation between these two items,<br />
whereas ratings (and thus transition models) react more slowly to market events and thus have a higher correlation with<br />
actual write-offs, which usually occur some time after the initial market deterioration.<br />

Of note is that there is very little difference in the correlation significance levels for VaR (DD) as compared to<br />
CVaR (CDD). This means that, although CVaR (CDD) is at much higher levels than VaR (DD) as previously discussed,<br />
the trend (percentage increase or decrease from year to year) is similar for both VaR (DD) and CVaR (CDD).<br />

Figure 2. Timeline and Correlations<br />

4. Conclusions<br />

This paper has shown how CVaR type metrics can be applied to credit models to measure extreme risk. All four<br />

models show highly significant differences between VaR (DD) and CVaR (CDD) measures. Increased volatility<br />



requires capital buffers to deal with the increased risk, and the paper demonstrated how this buffer requirement can<br />
be measured. Significant differences were observed between the structural based models (Xtreme-S and Xtreme-<br />
Q) as compared to the transition based models (Xtreme-T and Xtreme-i). The changes in risk measured by the<br />

structural models are more consistent with changes experienced in CDS Spreads than those shown by the transition<br />

based models, because both structural models and CDS spreads respond rapidly to market conditions. The opposite<br />

is true of charge-offs where the transition based models show much greater correlation than the structural based<br />

models, as there is generally a delay between defaults and charge-offs, and credit ratings also often respond more slowly<br />
(often annually) to market conditions than structural models. All models show significant correlation with<br />

delinquent loans.<br />

5. References<br />

Alexander, G. J., & Baptista, A. M. (2003). CVaR as a Measure of Risk: Implications for Portfolio Selection:<br />

Working Paper, School of Management, University of Minnesota.<br />

Alexander, S., Coleman, T. F., & Li, Y. (2003). Derivative Portfolio Hedging Based on CVaR. In G. Szego (Ed.),<br />

New Risk Measures in Investment and Regulation: Wiley.<br />

Allen, D. E., & Powell, R. (2009a). Structural Credit Modelling and its Relationship to Market Value at Risk: An<br />

Australian Sectoral Perspective. In G. N. Gregoriou (Ed.), The VaR Implementation Handbook (pp. 403-414).<br />

New York: McGraw Hill.<br />

Allen, D. E., & Powell, R. (2009b). Transitional Credit Modelling and its Relationship to Market at Value at Risk:<br />

An Australian Sectoral Perspective. Accounting and Finance, 49(3), 425-444.<br />

Andersson, F., Uryasev, S., Mausser, H., & Rosen, D. (2000). Credit Risk Optimization with Conditional Value-at<br />

Risk Criterion. Mathematical Programming, 89(2), 273-291.<br />

Artzner, P., Delbaen, F., Eber, J., & Heath, D. (1999). Coherent Measures of Risk. Mathematical Finance, 9, 203-<br />

228.<br />

Artzner, P., Delbaen, F., Eber, J. M., & Heath, D. (1997). Thinking Coherently. Risk, 10, 68-71.<br />

Australian Prudential Regulation Authority. (1999). Submission to the Basel Committee on Banking Supervision -<br />

Credit Risk Modelling: Current Practices and Applications.<br />

Birbil, S., Frenk, H., Kaynar, B., & Noyan, N. (2009). Risk Measures and Their Applications in Asset Management.<br />

In G. Gregoriou (Ed.), The VaR Implementation Handbook. New York: McGraw-Hill.<br />

Crosbie, P., & Bohn, J. (2003). Modelling Default Risk: Moody's KMV Company.<br />

Federal Reserve Bank. (2010). Federal Reserve Statistical Release: Charge-off and Delinquency Rates.<br />

Gupton, G. M., Finger, C. C., & Bhatia, M. (1997). CreditMetrics - Technical Document. New York: J.P. Morgan &<br />

Co. Incorporated.<br />

Koenker, R., & Bassett, G., Jr. (1978). Regression Quantiles. Econometrica, 46(1), 33-50.<br />

Koenker, R., & Hallock, K. (2001). Quantile Regression. Journal of Economic Perspectives, 15(4), 143 - 156.<br />

Menoncin, F. (2009). Using CVaR to Optimize and Hedge Portfolios. In G. N. Gregoriou (Ed.), The VaR Modeling<br />

Handbook. New York: McGraw Hill.<br />

Moody's Investors Service. (2010a). Moody's Speculative Grade Liquidity Ratings. Retrieved 31/10/2010. Available at<br />
www.moodys.com<br />

Moody's Investors Service. (2010b). Research & Ratings. Retrieved 31/10/2010. Available at www.moodys.com<br />

Pflug, G. (2000). Some Remarks on Value-at-Risk and Conditional-Value-at-Risk. In R. Uryasev (Ed.),<br />

Probabilistic Constrained Optimisation: Methodology and Applications. Dordrecht, Boston: Kluwer Academic<br />

Publishers.<br />

Rockafellar, R. T., & Uryasev, S. (2002). Conditional Value-at-Risk for General Loss Distributions. Journal of<br />

Banking and Finance, 26(7), 1443-1471.<br />

Rockafellar, R. T., Uryasev, S., & Zabarankin, M. (2006). Master Funds in Portfolio Analysis with General<br />

Deviation Measures. Journal of Banking and Finance, 30(2), 743-776.<br />

Samanta, P., Azarchs, T., & Hill, N. (2005). Chasing Their Tails: Banks Look Beyond Value-At-Risk.<br />
RatingsDirect.<br />

Standard and Poor's. (2009). Default, Transition, and Recovery: Annual Global Corporate Default Study and Rating<br />

Transitions.<br />

Uryasev, S., & Rockafellar, R. T. (2000). Optimisation of Conditional Value-at-Risk. Journal of Risk, 2(3), 21-41.<br />

Wilson, T. C. (1998). Portfolio Credit Risk. Economic Policy Review October, 4(3).<br />





COMMODITIES MARKETS<br />





VOLATILITY DYNAMICS IN DUBAI GOLD FUTURES MARKET<br />

Ramzi Nekhili, Faculty of Finance and Accounting, University of Wollongong in Dubai, UAE<br />

Michael Thorpe, School of Economics and Finance, Curtin University of Technology, Perth, Australia<br />

Email: m.thorpe@curtin.edu.au<br />

Abstract. This study explores the volatility dynamics of gold futures traded on the Dubai Gold and Commodities Exchange. We test<br />
the effects of the margin trading reform implemented by the Emirates Securities and Commodities Authority on the dynamic relationship<br />
between daily gold futures volatility and volume, open interest, and futures returns. We find that the volatility dynamics with respect<br />
to volume and returns are consistent with patterns in other futures markets, but not with respect to open interest, especially after the reform.<br />
Moreover, the reform has decreased trading volume and open interest and increased gold futures volatility.<br />

Keywords: Volatility; Dubai gold futures; Margin trading<br />

JEL classification: C30; G10<br />

1 Introduction<br />

The Dubai Gold and Commodities Exchange (DGCX) was established in 2005 and is the first international online<br />

derivatives market in the Middle East. An electronic trading platform allows members around the world direct<br />
market access. DGCX is jointly owned by the Dubai government’s Dubai Multi Commodities Center (DMCC),<br />

Financial Technologies (India) Limited, and the Multi Commodity Exchange of India Limited (MCX). DGCX<br />

trades futures contracts on gold, silver, fuel oil, steel, freight rates, cotton and three major currencies. Futures<br />

options contracts are traded for gold only.<br />

Dubai, the “City of Gold”, has historically been a major trading centre for spot gold, with the Dubai Multi<br />

Commodities Center (DMCC) estimating that in 2006, Dubai’s import and export of gold amounted to 489 and 274<br />

tonnes, respectively. The gold futures contract began trading on November 22, 2005, and the gold options on futures<br />

were introduced on April 30, 2007. The trading volume of the contract has been rising steadily, with a total of 71,316<br />
contracts (representing USD 1.5 billion in value) traded in March 2010 (an average of around 4,000 contracts a day). The<br />

contracts are traded on the DGCX’s electronic platform and continuously from Monday through Friday between<br />

8:30 am and 11:30 pm Dubai time, corresponding to 12:30 am to 4:30 pm New York time, 4:30 am to 7:30 pm<br />

London time and 12:30 pm to 3:30 am Singapore time. Hence the operating hours of the market in Dubai overlap<br />

exchanges in other major global centres. The size of the futures contract is 32 troy ounces (1 kg) of 0.995 purity<br />

according to the Dubai Good Delivery Standard. Delivery is made with a Dubai Gold Receipt. The contract matures in<br />

bi-monthly intervals, i.e., February, April, June, August, October and December, and the prices are quoted in USD<br />

(per troy ounce).<br />

DGCX is regulated by the Emirates Securities and Commodities Authority (ESCA) which is the regulatory<br />

authority for both the Dubai Financial Market and the Abu Dhabi Securities Market. ESCA has implemented several<br />

regulatory reforms that have impacted the operations of DGCX. These have been aimed at improving the efficiency<br />
of the market, protecting investors from unfair and incorrect practices, and providing regularity and stability for<br />
market trading with a view to ensuring smooth and prompt liquidation of positions. Such reforms are taken into<br />
consideration here to test their effects on the volatility dynamics of the Dubai gold futures market.<br />

In the literature, little attention has been paid to emerging markets, with most work addressing the effects of<br />

general regulations in the US market (see for example Ma et al., 1993, and Yang et al., 2001). An exception to this<br />

is a study by Chan et al. (2004) which has addressed the futures markets in China. There is a large body of literature<br />

that has looked at the determinants of the volatility of futures prices (see for example Najand and Yung, 1991,<br />

Foster, 1995, and Fung and Patterson, 2001). One area of this research focuses on the volatility of commodity<br />

futures. An early study by Garcia et al. (1986) investigated the impact of lagged volume in five commodity futures<br />



contracts on volatility and found significant positive relationships. Bessembinder and Seguin (1993) examined the<br />
link between volatility, volume and open interest of contracts. Their results suggested that trading volume had a<br />
significant positive effect on volatility, while open interest had a significant negative effect. The study by Chan et<br />
al. (2004) examined the daily volatility of four futures contracts on Chinese futures exchanges and found different<br />
patterns of volatility under different government regulatory reforms. Their results for volume and open interest<br />
effects are consistent with the literature, with positive and negative relationships respectively. Regulation is also<br />
shown to amplify the effects of these factors. The study also reports that both positive and negative returns are<br />
positively related to volatility, with negative returns associated with a more significant impact.<br />

In this paper, we examine the volatility dynamics of Dubai gold futures with respect to changes in variables<br />

such as volume, open interest, and futures returns. The study also seeks to shed light on the impact of margin trading<br />

reform implemented by ESCA on the volatility dynamics of Dubai gold futures. The study will be of practical<br />

benefit for the evolving finance industry in the United Arab Emirates (UAE) and the region generally. Relatively<br />
little analysis of financial markets in the Gulf region has been undertaken to date, and no study has been conducted of<br />
Dubai futures markets. It is expected that this investigation will provide a platform for further ongoing research in<br />
other derivatives markets which have been recently established in the UAE. Moreover, the relevance of this study<br />
stems from the importance policymakers and regulators place on improving the efficiency of financial markets and<br />
from the need for market participants to improve their understanding of emerging futures markets.<br />

The rest of the paper proceeds as follows. Section 2 presents the framework within which we conduct our<br />

empirical estimation. Section 3 describes our data and presents our results and Section 4 checks the robustness and<br />

consistency of our results. We conclude in Section 5.<br />

2 Methodology<br />

To investigate the dynamics of Dubai gold futures volatility, we first measure the daily volatility of futures prices<br />
using two extreme-value approaches: Parkinson (1980) and Rogers and Satchell (1991). The<br />
Parkinson measure uses daily high and low futures prices, while the Rogers-Satchell measure also incorporates daily opening<br />
and closing futures prices. Respectively, they are as follows:<br />

V_t = [1 / (4 log 2)] [log(High_t / Low_t)]^2 , (1)<br />

V_t = log(High_t / Open_t) log(High_t / Close_t) + log(Low_t / Open_t) log(Low_t / Close_t) . (2)<br />
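Eqs. (1) and (2) translate directly into code; a one-day sketch:<br />

```python
import math

def parkinson(high, low):
    """Eq. (1): extreme-value variance estimate from the daily range."""
    return (math.log(high / low) ** 2) / (4.0 * math.log(2.0))

def rogers_satchell(high, low, open_, close):
    """Eq. (2): adds the opening and closing prices, making the
    estimator robust to a nonzero drift in the price."""
    return (math.log(high / open_) * math.log(high / close)
            + math.log(low / open_) * math.log(low / close))
```

Applying either function to each trading day gives the daily variance series V_t used in the regression below.<br />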

Next, we examine how gold futures volatility relates to volume, open interest, and positive and negative returns.<br />

We envisage using lagged volume as an indicator of flow of information, to avoid simultaneity relationships with<br />

volatility, and open interest as an indicator of market depth. Chan et al. (2004) use open interest as a measure of the hedging<br />
activity that could mitigate futures volatility. In addition, to test whether there is evidence of asymmetric effects of<br />
returns on volatility, we include the positive and negative returns in the volatility specification. The regression<br />

specification is as follows:<br />

V_t = α_0 + α_1 X_{1,t-1} + Σ_{i=2..4} α_i X_{i,t} + e_t , (3)<br />

where X1t−1 is the log trading volume of the futures contract at time t − 1, X2t is the log open interest, X3t is<br />

the positive futures return at time t, equivalent to max[0, Rt]; and X4t is the negative futures return at time t,<br />
equivalent to min[0, Rt], with Rt being the daily futures return measured as the logarithmic difference between two<br />
consecutive futures prices.<br />

Specification (3) tests a number of hypotheses. First, we can see whether the effect of volume on volatility is<br />
positive, α1 > 0. Second, we test the market depth effect on volatility, α2 < 0. Finally, we test whether good news<br />
and bad news have effects on volatility by checking the coefficient signs, α3 > 0 and α4 < 0 respectively.<br />
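Specification (3) is a linear regression of the volatility measure on lagged log volume, log open interest and the two return parts. A numpy sketch on synthetic data (the coefficients 0.3, -0.2, 1.0, -0.8 are invented to match the hypothesised signs, not estimates from the paper):<br />

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_vol_lag = rng.normal(size=n)        # X1: lagged log trading volume
log_oi = rng.normal(size=n)             # X2: log open interest
ret = rng.normal(scale=0.01, size=n)    # daily futures returns R_t
pos_ret = np.maximum(0.0, ret)          # X3 = max[0, R_t]
neg_ret = np.minimum(0.0, ret)          # X4 = min[0, R_t]

# Simulated volatility with alpha1 > 0, alpha2 < 0, alpha3 > 0, alpha4 < 0
V = (0.5 + 0.3 * log_vol_lag - 0.2 * log_oi + 1.0 * pos_ret
     - 0.8 * neg_ret + rng.normal(scale=0.01, size=n))

X = np.column_stack([np.ones(n), log_vol_lag, log_oi, pos_ret, neg_ret])
alpha, *_ = np.linalg.lstsq(X, V, rcond=None)  # recovers the four signs
```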

We also directly address the regulatory reform concerning margin trading undertaken by the Emirates<br />



Securities and Commodities Authority during the study period, which runs from the inception of the futures contract. We<br />
conduct our analysis over the entire sample period as well as over two sub-periods, pre- and post-reform. The<br />
reform concerns changes in the regulations on margin trading that took effect in June 2008. Among many other<br />

decisions, ESCA has set an initial margin of not less than 50% of the market value of the securities traded on<br />

margin, as well as a maintenance margin of not less than 25% of the same traded market securities. In addition,<br />

DGCX imposes an extra margin call on all open positions when volatility is high.<br />

Our proposition is that the margin trading reform reduces trading volume and open interest and has an<br />
impact on gold futures price volatility through changes in market liquidity and depth. We expect that the volatility<br />
dynamics represented by Eq. (3) will display different results before and after the reform. Such a proposition was<br />
highlighted by Telser (1981), who showed that an increase in the cost of trading may lower volume and open interest<br />
and hence liquidity, which may, in turn, increase futures price volatility.<br />

The regression technique adopted for the analysis is the generalized method of moments (GMM) of Hansen<br />

(1982). This approach has been widely used in the literature to study the determinants of futures volatility. For a<br />

recent example, see Holmes and Tomsett (2004). This technique addresses the issue of time-varying conditional<br />

heteroskedasticity as well as the presence of any unconditional distributional properties. It also handles<br />

contemporaneous relationships between the variables of interest and provides autocorrelation consistent estimates.<br />

In considering both heteroskedasticity and autocorrelation, the Newey-West (1987) method for selecting the<br />

bandwidth is employed. We also take instruments from the independent variables and test whether specification (3)<br />

is exact using the J-statistic to test for overidentifying restrictions. Finally, both volatility measures are used to check<br />

the robustness of the results.<br />
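The Newey-West correction weights sample autocovariances with Bartlett kernel weights. A minimal long-run variance sketch for a single series (the bandwidth L is fixed by hand here, whereas the bandwidth selection in the paper is automatic):<br />

```python
import numpy as np

def newey_west_lrv(x, max_lag):
    """Bartlett-weighted long-run variance:
    gamma_0 + 2 * sum_{j=1..L} (1 - j/(L+1)) * gamma_j,
    where gamma_j is the lag-j sample autocovariance."""
    e = np.asarray(x, dtype=float)
    e = e - e.mean()
    n = len(e)
    lrv = np.dot(e, e) / n                       # gamma_0
    for j in range(1, max_lag + 1):
        gamma_j = np.dot(e[j:], e[:-j]) / n
        lrv += 2.0 * (1.0 - j / (max_lag + 1.0)) * gamma_j
    return lrv
```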

3 Data and Results<br />

The data consist of daily observations on gold futures contracts traded on the Dubai Gold and Commodities Exchange.<br />
These contracts are the most active ones in Dubai. The sample period covers the contracts traded from May 2007 to<br />
June 2010. The data are collected from the DGCX and include the daily high futures price, daily low price, opening price,<br />
closing price, trading volume and open interest.<br />

Table 1 provides descriptive statistics for the full sample and the two sub-periods. A preliminary investigation<br />

of the raw data reveals that the average daily return is positive, as seen from the difference between the mean of<br />
positive returns and the absolute mean of negative returns. Post-reform trading volume and open interest have<br />
decreased compared to pre-reform figures. The volatility, measured by both Parkinson and Rogers-Satchell, has<br />
increased in magnitude (by around 0.01%) after the margin trading reform. This finding is in line with Telser’s<br />

(1981) argument that an increase in margin requirements may serve to increase futures price volatility if market<br />

liquidity is reduced.<br />

Table 2 displays the GMM estimation results using both Parkinson (Panel A) and Rogers-Satchell (Panel B)<br />

volatility measures. Over the full period of the study the results indicate a negative relation between negative futures<br />

returns and volatility and a positive relation between positive returns and volatility. This is in line with the findings<br />

in the literature. There is an asymmetric effect observed, however, when considering the significance of the<br />
coefficients of positive and negative returns. The absolute magnitude of the positive-return coefficient is higher than that of the<br />
negative-return coefficient, which does not conform to the evidence from developed markets, but is similar to the results found for the<br />
Chinese market by Chan et al. (2004). Unexpectedly, the open interest coefficient is significant with a positive<br />
sign, but this may be due to the use of the full period, which masks sub-period effects.<br />

Finally, and in line with the expected results from the literature, the volume of trades is positively related to gold<br />

futures volatility. These results are similar for both measures of volatility. Nevertheless, the adjusted R2 is higher<br />

using the Parkinson volatility measure which provides support for this approach in modelling the daily volatility of<br />

gold futures in this study. Moreover, the J-statistic is low, so the overidentifying<br />
restrictions imposed in the regression are not rejected.<br />

In looking at the two sub-periods, the asymmetric effect of returns is again seen to be significant with respect to<br />

volatility. As observed for the full period, the magnitude in absolute terms is higher for the positive returns than for<br />



the negative returns. It seems that market participants react more to good news than to bad news. The effect of volume<br />

on volatility, although positive, becomes insignificant after the reform. This is most likely due to the fact that the<br />

flow of information was improved and that speculation by day traders had lost momentum.<br />

Open interest is seen to be negatively related to volatility pre-reform, as expected, but exhibits a positive<br />

relationship post-reform. It would appear that the level of hedging activity mitigated the volatility during the earlier<br />

period, but that the regulation on margin trading in 2008 resulted in the positive relationship between open interest<br />

and gold futures price volatility. This signals a decrease in market depth, which is equivalent to a decrease in<br />
liquidity and consequently has increased price volatility. Given that DGCX imposes an extra margin call on all open<br />
positions in times of high volatility, the resultant increase in cost seems to discourage hedgers from holding their positions for<br />
long, with the result that their investments become speculative positions rather than hedge positions. This, in turn,<br />
does not contribute to stabilizing Dubai gold futures prices.<br />

Overall, it appears that gold futures volatility has increased significantly due to the implementation of the<br />

margin trading regulatory reform. Market dynamics with respect to volume and returns are consistent with<br />
patterns in other futures markets, but not with respect to open interest, especially after the reform took effect. This can be a feature<br />
of an emerging futures market such as Dubai’s.<br />

4 Additional Robustness Checks<br />

Having used unconditional volatility estimates, we also present results with a conditional volatility measure,<br />
of the GARCH type, to see whether the basic findings of the analysis are altered. We first assume that the<br />
returns follow a martingale with drift and a GARCH(1,1) volatility specification, then we extract the conditional<br />
GARCH volatility and run a regression on the variables of interest. The following model highlights this<br />
specification:<br />

R_t = μ + u_t , with u_t ~ iid(0, σ_t^2) , (4)<br />

σ_t^2 = a + b σ_{t-1}^2 + c u_{t-1}^2 .<br />
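The variance recursion in Eq. (4) can be sketched as a simple filter (the parameter values in the comments are illustrative, not the fitted ones):<br />

```python
def garch11_variances(residuals, a, b, c, sigma2_init):
    """Eq. (4) recursion: sigma2_t = a + b*sigma2_{t-1} + c*u_{t-1}^2.
    Returns one conditional variance per residual."""
    sigma2 = [sigma2_init]
    for u in residuals[:-1]:
        sigma2.append(a + b * sigma2[-1] + c * u * u)
    return sigma2

# With b + c < 1 the process is stationary, with unconditional
# variance a / (1 - b - c); the filtered series mean-reverts.
```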

The assumption of i.i.d. innovations is almost certain to be violated but may not limit the purposes of the<br />

analysis. Nevertheless, being aware of the non-normality of the innovations, we assume a Student-t distribution for<br />
the return innovations. Table 3, Panel A, displays the estimation results and shows significant ARCH and GARCH<br />
effects. This indicates that there is volatility persistence in the gold futures returns, meaning that large volatility<br />
increases last at least into the following day. Panel B of Table 3 shows the results of the estimation of Eq. (3) using<br />
the conditional volatility extracted from specification (4). The results confirm the inferences obtained from the<br />
analysis conducted with the unconditional volatilities, with one exception: a significant negative relationship<br />
between the lagged trading volume and volatility.<br />

Furthermore, we undertake an additional robustness check by separating out specific gold futures contracts<br />

traded within the study period to highlight the effect of the change in the margin trading regime. To ensure the<br />
margin trading reform effect is captured, we choose the contracts maturing in August 2008 and August 2009.<br />
The starting trading dates of these contracts are, respectively, August 8, 2007, and August 8, 2008. In the DGCX,<br />

the last trading day is the business day six days prior to delivery; therefore the last trading days for the contracts are<br />

respectively, July 31, 2008, and July 31, 2009.<br />

Table 5 displays the GMM estimation results using only the Parkinson measure of volatility. For each futures<br />
contract, we find that the asymmetries in the return effect on volatility are statistically significant. We also find no<br />
effect of open interest on the volatility dynamics of the August 2009 contract, similar to the previous post-reform results. In<br />

addition, and consistent with the previous results, the patterns found in the volatility with respect to volume, open<br />

interest, and returns are similar to the ones found using the complete time series.<br />



5 Conclusion<br />

This paper investigates factors influencing the volatility of the gold futures contracts traded on the Dubai Gold and<br />

Commodities Exchange (DGCX). The study looks at the period May 2007 to June 2010. The effect on market<br />

volatility of the margin trading reform introduced in June 2008 is also considered.<br />

In line with expectations, the volume of trading, which can be considered a proxy for speculative market<br />

activity, is observed to be positively linked to volatility. The effect of open interest, a measure of market depth or<br />
hedging activity, is shown to vary over the two sub-periods considered. Pre-reform, the results indicate a negative<br />
relationship with volatility, in line with expected findings. However, a positive relationship is evident in the sub-period<br />
after the reform. The results also suggest that the regulation of margin trading has the effect of raising market<br />

volatility.<br />

Overall, the study also found, in line with the literature, that there was an asymmetric effect of returns on volatility. Negative returns were associated with lower volatility, while positive returns were positively related to volatility. However, an unexpected result was that positive returns appear to have a greater impact on volatility than negative returns: there appears to be more reaction to good news than to bad.<br />

Future research could test the predictive power and information content of gold futures volatility relative to other measures, such as option-implied volatility, in explaining future realized volatility.<br />

6 References<br />

Bessembinder, H. & Seguin, P.J., 1993. Price Volatility, Trading Volume, and Market Depth: Evidence from<br />

Futures Markets. Journal of Financial and Quantitative Analysis, 28, 21-39.<br />

Chan, K.C., Fung, H.G., & Leung, W.K., 2004. Daily volatility behavior in Chinese futures markets. Journal of<br />

International Financial Markets, Institutions, and Money, 14, 491-505.<br />

Foster, A.J., 1995. Volume-volatility relationships for crude oil futures markets. Journal of Futures Markets, 15,<br />

929-951.<br />

Fung, H.G. & Patterson, G.A., 2001. Volatility, global information, and market conditions: a study in futures<br />

markets. Journal of Futures Markets, 21, 173-196.<br />

Garcia, P., Leuthold, R.M., & Zapata, H., 1986. Lead-lag relationships between trading volume and price variability:<br />

new evidence. Journal of Futures Markets, 6, 1-10.<br />

Holmes, P. & Tomsett, M., 2004. Information and noise in U.K. futures markets. Journal of Futures Markets, 24,<br />

711-731.<br />

Ma, C.K., Wenchi Kao, G., & Frohlich, C.J., 1993. Margin requirements and the behavior of silver futures prices.<br />

Journal of Business Finance and Accounting, 20 (1), 41-60.<br />

Najand, M. & Yung, K., 1991. A GARCH examination of the relationship between volume and price variability in<br />

futures markets. Journal of Futures Markets, 11, 613-621.<br />

Newey, W. & West, K., 1987. A simple positive semi-definite heteroskedasticity and autocorrelation consistent<br />

covariance matrix. Econometrica, 55, 703-708.<br />

Parkinson, M., 1980. The Extreme Value Method for Estimating the Variance of the Rate of Return. Journal of<br />

Business, 53, 61-65.<br />

Rogers, L.C.G. & Satchell, S.E., 1991. Estimating variance from high, low, and closing prices. Annals of Applied<br />

Probability, 1, 500-512.<br />

Telser, L.G., 1981. Margins and futures contracts. Journal of Futures Markets, 1, 225-253.<br />

Yang, J., Bessler, D.A., & Leatham, D.J., 2001. Asset storability and price discovery of commodity futures markets:<br />

a new look. Journal of Futures Markets, 21, 279-300.<br />



STOCK RETURNS AND OIL PRICE BASED TRADING<br />

Michael Soucek, European University Viadrina, Germany<br />

Email: soucek@europa-uni.de<br />

Abstract. While the linkage between energy prices and stock prices has been widely investigated, the literature discussing practical implications of this relationship is rather scarce. The aim of this paper is to test whether the relationship between these two markets is economically exploitable. As a proxy for equity returns I use MSCI reinvestment indices for five developed countries as well as the MSCI emerging markets index; for oil prices, Brent crude is considered. Since the analyzed return series are stationary and Granger causality between oil and stock returns is observable, a trading strategy based on a bivariate VAR(p) model is employed. The trading rule provides significant abnormal returns, measured by the Sharpe ratio and Jensen's alpha, for weekly and monthly data over the last twenty years for developed markets. The standard deviation of the oil-based trading strategy is significantly lower than that of the buy-and-hold strategy. The results are robust to sub-period analysis and to variation in the in-sample period. Using information from the oil market to predict stock returns, the results show that the markets do not fulfill the requirements of semi-strong market efficiency in the sense of Jensen (1978).<br />

Keywords: Stock Returns, Oil Prices, VAR, Market Efficiency, Trading strategy<br />

JEL classification: G11, G14, Q40<br />

1 Introduction<br />

Oil prices and their effect on stock markets are an object of ongoing research in the academic literature. From the<br />

asset pricing point of view, identifying what drives equity prices is the key topic. According to the theory, the value<br />

of a firm equals the discounted sum of the expected future cash flows. The cash flows themselves are assumed to be<br />

influenced by a company's relative economic condition. Possible drivers of equity prices are, for instance, interest<br />

rates, exchange rates, income, revenue or production costs and diverse macroeconomic factors such as GDP or<br />

prices of resources. Because oil is one of the important inputs for almost all kinds of industries, the hypothesis about<br />

the link between the oil and stock market seems reasonable. The aim of this paper is to draw possible implications<br />

for market participants. Most studies on the relationship between oil and stock prices conclude by making a<br />

statement about the connection and causality between the time series data. While the majority of recent research<br />

studies agree on the existence of a link between oil prices and stock prices (Driesprong, Jacobsen, & Maat, 2008; Sadorsky, 1999; Park & Ratti, 2008; Basher & Sadorsky, 2006; Nandha & Faff, 2008), it is still possible that even though the results are statistically significant, they are not economically exploitable. This paper introduces a trading strategy based on an underlying VAR(p) model containing oil returns as a predictor of future stock returns. It is shown that, for monthly and weekly returns, a trading technique based on a bivariate VAR(p) model can provide<br />

investors with abnormal returns and has comparably favorable risk-return characteristics relative to the buy-and-hold<br />

policy. This trading strategy outperforms the market return measured by Jensen's alpha and Sharpe ratio, and<br />

exhibits significantly lower standard deviation of its returns when compared to the market index. The results hold<br />

especially for developed markets. The emerging market index does not seem to be strongly affected by the oil price<br />

changes. To confirm the validity of our results, the study contains a necessary robustness check of the trading rule<br />

itself. This test has been frequently omitted in the literature.<br />

The results of the study provide not only valuable information for investors, hedgers and other market<br />

participants but also contribute to the ongoing research on market efficiency. Fama (1970) provides the textbook<br />

definition of an efficient market, which has been broadly cited in the literature: "A market in which prices always<br />

fully reflect available information is called efficient." A more detailed definition has been developed by Jensen (1978): a market is efficient with respect to an information set if it is impossible to make economic profits by<br />

trading on the basis of this information set. Because economic profits are defined as risk-adjusted returns after<br />

deducting transaction costs, Jensen's definition implies that the efficient market hypothesis (EMH) can be tested by<br />

considering the net profits and risk of trading strategies based on available information. Jensen introduced three<br />

types of market efficiency: Weak form efficiency, in which the information set is limited to the information<br />

contained in the past price history of the market; semi-strong form efficiency, where the information set is all information that is publicly available at a certain time; and strong form efficiency, where the information set is all public and private information available at a certain time. A good survey of the results of technical analysis used for<br />



testing the efficient market hypothesis, as well as a discussion of the theoretical concepts related to technical trading, is provided by Park & Irwin (2007). In this paper, information from the oil market was used to trade equity, and the results obtained support the hypothesis that the developed markets do not fulfill the requirements of a semi-strong efficient market. This could not be shown for the emerging markets index. The paper is structured as follows. In<br />

section 2, the empirical methodology for investigating the relationship between the series as well as the dataset is<br />

introduced. Section 2 also features the design and implementation of a trading technique based on the bivariate<br />

VAR(p) model. The out-of-sample results are described in section 3. Section 4 concludes the paper.<br />

2 Model and Data<br />

2.1 Model<br />

Based on Sims’s findings (Sims, 1980) the relationship between two stationary time series can be empirically<br />

investigated by the vector autoregressive model. In fact, this approach has often been used in academic literature to<br />

evaluate the relationship between stock and oil prices. In the most basic bivariate case, a VAR consists of two<br />

equations and can be written as follows:<br />

y_t = c_1 + \sum_{p=1}^{P} \phi_{1p} y_{t-p} + \sum_{p=1}^{P} \theta_{1p} x_{t-p} + \varepsilon_{1t}, (1)<br />

x_t = c_2 + \sum_{p=1}^{P} \phi_{2p} x_{t-p} + \sum_{p=1}^{P} \theta_{2p} y_{t-p} + \varepsilon_{2t}, (2)<br />

where c_1 and c_2 denote the constants and \varepsilon_{1t} and \varepsilon_{2t} are white noise error terms. y_t and x_t denote the observations of two stationary time series, and P is the lag length used. In the following study, one of the equations describes the oil<br />

price dynamics, while the other represents the analyzed equity index movements. One of the advantages of using a<br />

VAR is that in doing so, it is not necessary to provide prior assumptions about which variables are exogenous and<br />

which are endogenous. In this model, all variables are treated as endogenous. Each variable depends on the lagged<br />

values of all other variables in the system. Because the model is sensitive to the choice of lag length, the appropriate<br />

number of lags is usually estimated using the Akaike Information Criterion (AIC) or Schwarz Bayesian Criterion<br />

(SBC). In the case of non-stationary time series, which are integrated of order one, i.e. (I(1)), the VAR in first<br />

differences can be estimated, and conventional asymptotic theory can be used for hypothesis testing.<br />
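The OLS estimation behind equations (1) and (2) can be sketched in a few lines. The following is an illustrative numpy-only implementation with simulated data, not the code used in the study; `fit_var` is a hypothetical helper:

```python
import numpy as np

def fit_var(data, p):
    """OLS estimate of a VAR(p): each variable is regressed on a constant
    and p lags of all variables. data has shape (T, k); returns a
    (k, 1 + k*p) coefficient matrix, one row per equation."""
    T, k = data.shape
    # Regressor matrix: constant, then lag 1 of all variables, then lag 2, ...
    X = np.hstack([np.ones((T - p, 1))] +
                  [data[p - lag:T - lag] for lag in range(1, p + 1)])
    Y = data[p:]
    coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coefs.T

# Simulated bivariate VAR(1) with known coefficient matrix A.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.3]])
data = np.zeros((5000, 2))
for t in range(1, 5000):
    data[t] = A @ data[t - 1] + 0.1 * rng.standard_normal(2)

B = fit_var(data, p=1)  # B[:, 1:] should be close to A
```

In practice the lag length p would be chosen by minimizing the AIC or SBC over candidate values, as described above.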

2.2 Data<br />

This study employs monthly data from January 1990 to March 2011. Morgan Stanley Capital International (MSCI) reinvestment indices have been used to track the development of the equity markets. I consider MSCI indices for the USA, European Union, Germany, France and Canada, as well as the MSCI emerging markets index. The log returns have been calculated as differences in log prices. I consider monthly prediction intervals calculated at the end of the month and weekly prediction intervals calculated on the Wednesday of each week, to avoid the impact of higher trading volumes on Mondays and Fridays. The one-month Eurodollar London Interbank Offered Rate (LIBOR) was used as an approximation of the risk-free interest rate. Brent crude oil is used as a proxy for the oil price. All prices have been converted to US dollars.<br />
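The return construction amounts to first differences of log prices, e.g.:

```python
import numpy as np

def log_returns(prices):
    """Log returns as first differences of log prices:
    r_t = ln(P_t) - ln(P_{t-1})."""
    return np.diff(np.log(np.asarray(prices, dtype=float)))

r = log_returns([100.0, 110.0, 99.0])  # two returns for three prices
```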

2.3 Unit Root Test<br />

The first step is to investigate the stationarity of the observed univariate time series, which is the key assumption of the OLS-based VAR(p) estimation. I adopt the Augmented Dickey-Fuller (ADF) test for unit roots to evaluate this property. For all time series, I run the following regression:<br />

\Delta y_t = \alpha + \beta t + \gamma y_{t-1} + \sum_{p=1}^{P} \delta_p \Delta y_{t-p} + \varepsilon_t, (3)<br />



where the optimal lag P is based on the AIC. The null hypothesis of the ADF test is \gamma = 0. If the variable is integrated of order one, y_{t-1} provides no relevant information for predicting \Delta y_t beyond that already contained in the lagged differences \Delta y_{t-p}. In such a case, the null hypothesis cannot be rejected; i.e., the time series y has a unit root. The t-values are compared with the Dickey-Fuller (DF) t-values and summarized in table 1. As the results indicate, all log price<br />

series are integrated of order one, which implies that the log return time series are stationary. The result has been<br />

verified using the Phillips-Perron univariate test and is in line with the generally accepted fact that the stock return time<br />

series are stationary.<br />
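The mechanics of the test can be illustrated with a stripped-down Dickey-Fuller regression (constant only, no trend or augmentation lags) on simulated series; `dickey_fuller_t` is a hypothetical helper, and in practice the statistic is compared with DF critical values rather than standard normal ones:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic for gamma in dy_t = alpha + gamma * y_{t-1} + e_t.
    A strongly negative value argues against a unit root."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

# Stationary AR(1) series vs. a random walk (unit root).
rng = np.random.default_rng(1)
eps = rng.standard_normal(2000)
ar1 = np.zeros(2000)
for t in range(1, 2000):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]
t_stationary = dickey_fuller_t(ar1)       # strongly negative: reject unit root
t_walk = dickey_fuller_t(np.cumsum(eps))  # near zero: cannot reject
```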

2.4 Granger causality<br />

Analysis of the estimated coefficients themselves sometimes reveals little of interest. Bivariate VAR models are in most cases used to test Granger causality, i.e., to determine whether lags of one variable help to explain the current value of the other variable. The results provide information about the dynamics of the data. Because the appropriate model for the relationship between the log prices of stocks and oil is a VAR(p) in first differences, which is nothing but a VAR(p) in the stationary log return series, we can analyze Granger causality between the returns of Brent oil and the analyzed equity indices. The null hypothesis of the test is that the variable y does not Granger-cause the variable x, and vice versa. To verify causality, I estimate the unrestricted equations (1) and (2), re-estimate them under the restriction that the lags of the other variable are excluded, and then use the F-statistic to test for the equality of both models. If the explanatory power differs, it can be concluded that x Granger-causes y. Because the relationship is not symmetric, Granger causality in one direction does not imply causality in the other. The test results for the causality relationships are summarized in table 1. Columns 3 and 4 of table 1 contain the p-values of the Granger causality test. Additionally, the data have also been tested for instantaneous causality. If, in period t, adding x_t to the information set improves the forecast of y_{t+1}, there is instantaneous causality between the two variables (Lütkepohl, 2005). Because this concept of causality is symmetric, we speak about instantaneous causality between y and x and deem the direction irrelevant. The last column summarizes the p-values of the test for instantaneous causality. The statistically significant link obtained for the stock indices is also in line with common findings.<br />
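The restricted-versus-unrestricted F-test described above can be sketched as follows; an illustrative numpy implementation on simulated data where x leads y by one period (`granger_f` is a hypothetical helper, not the study's code):

```python
import numpy as np

def granger_f(y, x, p):
    """F-statistic for H0: lags of x add nothing to a regression of y on
    its own lags (x does not Granger-cause y); restricted vs. unrestricted OLS."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    Y = y[p:]
    const = np.ones((T - p, 1))
    lags_y = np.column_stack([y[p - i:T - i] for i in range(1, p + 1)])
    lags_x = np.column_stack([x[p - i:T - i] for i in range(1, p + 1)])

    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    ssr_r = ssr(np.hstack([const, lags_y]))           # restricted model
    ssr_u = ssr(np.hstack([const, lags_y, lags_x]))   # unrestricted model
    df_resid = (T - p) - (1 + 2 * p)
    return ((ssr_r - ssr_u) / p) / (ssr_u / df_resid)

# x leads y by one period, so x should Granger-cause y but not vice versa.
rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
y = 0.5 * np.roll(x, 1) + 0.2 * rng.standard_normal(1000)
y[0] = 0.0
f_xy = granger_f(y, x, p=1)  # large F: reject "x does not cause y"
f_yx = granger_f(x, y, p=1)  # small F: cannot reject
```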

MSCI Index Lag ADF Oil -> Market Market -> Oil Inst. Causality<br />

Weekly Returns<br />

United States 7 -12.447 0.01091 0.04831 0.0778<br />

European Union 3 -16.169 0.02116 0.00951 0.0001<br />

Germany 3 -16.661 0.01026 0.01449 0.0309<br />

France 3 -16.565 0.01301 0.00638 0.0079<br />

Canada 8 -9.506 0.29070 0.00004 0.0001<br />

Emerging Markets 8 -9.172 0.01650 0.00001 0.0090<br />

Monthly Returns<br />

United States 1 -10.878 0.0775 0.3208 0.3766<br />

European Union 4 -6.2122 0.2194 0.0143 0.3270<br />

Germany 1 -11.555 0.3450 0.0390 0.8450<br />

France 3 -7.028 0.3460 0.0740 0.3340<br />

Canada 1 -10.052 0.3430 0.9178 0.0001<br />

Emerging Markets 1 -9.348 0.0825 0.1399 0.0169<br />

Table 1: ADF and Granger causality<br />

2.5 Trading Strategy<br />

Studies on the link between oil prices and stock markets typically end with a statement concerning the causal<br />

relationship between the two time series, concentrating on the strength, direction and significance of the<br />

relationship. Literature on the implications for investor decisions or portfolio diversification is rather scarce. Geman<br />

& Kharoubi (2008), for instance, discuss a copula approach for optimal maturity choice via diversification with oil<br />

futures, and in addition provide a valuable survey of recent literature on the topic. Arouri & Nguyen (2010) show<br />

that diversification with oil spot price-based products can improve the risk-return profile of stock portfolios. While<br />

the majority of recent research studies agree about the existence of the link between oil price and stock price, it is<br />

possible that while the results are statistically significant, they are not economically exploitable. Existing literature<br />



pays little attention to this fact. A recent paper (Driesprong, Jacobsen & Maat, 2008) has shown that a simple trading rule based on one-month-lagged oil returns yields a better risk-return profile than a buy-and-hold strategy, and that the economic significance of oil price shocks (Hong, Torous & Valkanov, 2007) is stronger than that of other possible predictors, such as interest rates or dividend yields. This paper expands upon the idea of exploiting the relationship between the two markets and shows that a bivariate-VAR-based trading rule strongly decreases the risk of an investment and yields abnormal returns for the investor in the case of developed markets. The results are robust and valid for weekly and monthly returns over the last twenty years. Furthermore, the paper extends the literature on market efficiency based on Jensen (1978): a market is efficient with respect to an information set if it is impossible to make economic profits by trading on the basis of this information set. Oil price changes belong to what is classified as publicly available information, which makes this study a test of the semi-strong market efficiency hypothesis. The trading strategy is to exploit one-step-ahead forecasts of the stock returns based on movements in the oil spot price. To obtain the forecasts, a bivariate rolling VAR(p) model was employed:<br />

\tilde{y}_{t+1} = \hat{c}_1 + \sum_{p=0}^{P-1} \hat{\phi}_p y_{t-p} + \sum_{p=0}^{P-1} \hat{\theta}_p x_{t-p}. (4)<br />

y_{t-p} stands for the index stock returns and x_{t-p} describes past changes in the price of Brent oil; \tilde{y}_{t+1} is the one-period-ahead predicted stock index return. The parameters \hat{\phi} and \hat{\theta} are estimated from a 48-month rolling in-sample window. The optimal lag length P has been estimated for every single rolling window using the AIC. The trading rule is designed to generate a signal for the investor to invest in the equity market or in LIBOR bonds each month. If the forecasted equity market return \tilde{y}_{t+1} is greater than the current risk-free rate, it generates a signal to invest in the equity market; otherwise, it signals that the investor should invest in bonds. Similar trading rules have been used in various studies on technical trading (Park & Irwin, 2007). This strategy is easily implemented using index futures or another instrument tracking the index, such as an exchange-traded fund (ETF). Transaction costs are assumed to be 0.1%, which is consistent with the literature on technical trading. The returns were compared with the returns acquired by holding the market index.<br />
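The rolling-window forecast and signal rule can be sketched as follows. This is an illustrative simplification, not the study's code: it fixes the lag length p instead of re-selecting it by AIC in each window, and ignores transaction costs:

```python
import numpy as np

def trading_signals(stock, oil, rf, window=48, p=1):
    """One-step-ahead rolling VAR(p) forecast of stock returns from own lags
    and oil-return lags; invest in equity (signal 1) when the forecast beats
    the risk-free rate, otherwise hold bonds (signal 0)."""
    data = np.column_stack([stock, oil])
    signals = []
    for t in range(window, len(stock)):
        win = data[t - window:t]
        # OLS within the window: stock_t on a constant and p lags of both series.
        X = np.hstack([np.ones((window - p, 1))] +
                      [win[p - lag:window - lag] for lag in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, win[p:, 0], rcond=None)
        # Regressors for the forecast: constant, then most recent p observations.
        x_now = np.concatenate([[1.0], win[-p:][::-1].ravel()])
        signals.append(1 if x_now @ beta > rf[t] else 0)
    return np.array(signals)
```

Applied to weekly or monthly return series, the resulting 0/1 signal switches the portfolio between the index-tracking instrument and LIBOR bonds.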

To evaluate the results and to show that they are not spurious, diverse tests and robustness checks have been implemented to measure the applicability of the trading strategy. I discuss the risk and return characteristics of the strategy using the Sharpe ratio and a standard deviation comparison. Furthermore, the CAPM-based Jensen's alpha was estimated by running the following regression:<br />

r_t^{OS} - r_f = \alpha_i + \beta_i (r_t^{Index} - r_f) + \varepsilon_t, (5)<br />

where r_t^{OS} describes the return of the oil strategy at time t and r_t^{Index} the return of the corresponding market index, which is nothing but the return of the buy-and-hold strategy. The estimate \beta_i is the systematic risk of the trading strategy relative to the market return, and \alpha_i is Jensen's alpha. To account for possible heteroscedasticity and autocorrelation in the residuals, the test statistics are calculated using HAC standard errors. Descriptive statistics for the trading rule and the results of the Jensen's alpha regression are summarized in table 2 for weekly data, and the results obtained for monthly observations are provided in table 3.<br />
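The point estimates in equation (5) can be obtained with a simple OLS sketch; plain OLS is shown here, whereas the paper's t-values use Newey-West (HAC) standard errors (`jensen_alpha_beta` is a hypothetical helper):

```python
import numpy as np

def jensen_alpha_beta(strategy, index, rf):
    """OLS estimates of alpha_i and beta_i in the excess-return regression
    (r_strategy - r_f) = alpha + beta * (r_index - r_f) + e."""
    y = np.asarray(strategy, dtype=float) - np.asarray(rf, dtype=float)
    x = np.asarray(index, dtype=float) - np.asarray(rf, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha, beta
```

A positive alpha indicates a risk-adjusted return above the buy-and-hold benchmark; a beta below one indicates lower systematic risk than the index.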

3 Summary<br />

3.1 Results<br />

For all analyzed indices for developed markets, the VAR-based trading strategies generate higher mean returns and smaller standard deviations than the buy-and-hold policy. An F-test comparing the variances shows that the variance of the buy-and-hold strategy is significantly larger than that of the trading rule. Remarkably large Sharpe ratios also support the hypothesis of a more favorable risk-return relationship for the oil-based strategy compared to the buy-and-hold policy. Tables 2 and 3 describe more precisely the economic significance of the trading rule. The tables provide a comparison of the descriptive statistics of the buy-and-hold (BH) and oil-based trading strategies. The mean is<br />



given in % per month, SD stands for standard deviation, and SR for the Sharpe ratio in %. The tables also contain the estimates of the Jensen's alpha regression, as well as tests for reasonable trading signals. The t-values are calculated using HAC standard errors. Jensen's alpha is given in % per annum. For all analyzed indices, the Jensen's alpha is significantly positive. Implementing this strategy for the USA MSCI equity index would yield, on average, a risk-adjusted 6.4% per annum more than the buy-and-hold strategy. Furthermore, the beta coefficients, which measure the systematic risk of the trading strategy, are significantly smaller than one, confirming that the oil strategy is substantially less risky than investing in the market index. A good trading strategy should provide reasonable signals for going into or out of the stock market. If the rule implies holding the market-index-related product, one should expect significantly positive returns of the market index (H1); a good trading strategy should be capable of indicating a bullish market. On the contrary, if the rule implies that the optimal step is to invest in bonds, one might expect a downward trend in the market index and negative returns on the equity market. If the return of the market index in this period is significantly positive, the trading technique is missing potential sources of additional gains. In this sense, one might expect the market returns to be less than or equal to zero in the periods following the trading strategy's sell signal (H2). Columns H1 and H2 in tables 2 and 3 contain the t-values for the corresponding tests. I observe that for all indices the trading strategy yields positive returns after a buy signal; after a sell signal, the market is bearish, but not significantly so for the majority of indices. It is remarkable that, even though we are unable to prove Granger causality for all analyzed pairs, especially for monthly observations, the trading technique provides good results for all equity indices. A possible explanation is that, while causality was tested over the whole sample, only the relationship within the short in-sample window is relevant for the trading rule. The profitability of the trading strategy indicates that the strength of the causality relationship could vary over different subsamples. The exploitability of the trading rule gives further evidence of the significance of the relationship.<br />

MSCI Index Mean BH SD BH SR BH Mean SD SR<br />

United States 0.118 2.411 1.972 0.231 1.852 8.658<br />

European Union 0.096 2.819 0.877 0.236 1.911 8.661<br />

Germany 0.107 3.367 1.085 0.288 2.388 9.110<br />

France 0.105 3.167 1.083 0.199 1.083 5.581<br />

Canada 0.192 3.021 4.027 0.285 2.220 9.647<br />

Emerging Markets 0.091 3.192 0.622 0.066 2.064 -0.229<br />

H1 H2 α t(α) β t(β)<br />

United States 3.193 -1.535 0.132 2.993 0.594 11.258<br />

European Union 3.203 -1.569 0.154 2.969 0.462 8.162<br />

Germany 3.235 -1.905 0.199 3.288 0.507 8.811<br />

France 2.200 -0.870 0.110 2.060 0.527 9.494<br />

Canada 3.565 -1.040 0.148 2.575 0.542 7.967<br />

Emerging Markets -0.068 0.573 -0.013 -0.216 0.418 4.745<br />

Table 2: Weekly Returns Trading Strategy<br />

3.2 Robustness checks<br />

The trading decision is based on a comparison of the forecast generated by the VAR(p) and the risk-free rate represented by LIBOR. It is theoretically possible that the signal exhibits predictive power because of the expected return estimates or because LIBOR itself has significant predictive power for future market development. In the latter case, it would not be possible to conclude that the oil-price-based strategy really outperforms the market. Although it is essential to analyze the rule itself when evaluating the results of a trading technique, this verification has frequently been omitted in the literature. In this study, to test whether interest rates have predictive power for stock returns, I run the following simple regression for the sample period from 1990 to 2011:<br />

r_t^{I} = \alpha_i + \beta_i \mathrm{LIBOR}_{t-1} + \varepsilon_t, (6)<br />

where \alpha_i is a constant and \varepsilon_t is the error term. For instance, for the US stock market, the HAC-standard-error-based t-statistic is -0.178 for monthly data and -0.091 for weekly returns and weekly risk-free rates. This implies a lack of significant predictive power of the US term structure. The regression results are<br />



insignificant for all analyzed indices. To conclude, this trading technique is suitable for exploiting the oil-price-based stock return forecasts. Furthermore, to check the robustness of the empirical results, the following additional changes were made to the underlying strategy. First, I divided the dataset into two sub-samples such that both samples have ten-year out-of-sample periods (i.e., 1990-2004 and 1996-2011). In both cases, the trading rule provided higher returns and a significant Jensen's alpha. Reducing the in-sample period to three or five years similarly leaves the results unchanged.<br />

MSCI Index Mean BH SD BH SR BH Mean SD SR<br />

United States 0.516 4.593 4.490 0.834 3.085 17.010<br />

European Union 0.414 5.387 1.934 0.769 3.363 13.670<br />

Germany 0.472 6.808 2.384 0.827 3.418 15.127<br />

France 0.455 6.031 2.419 0.490 4.497 7.536<br />

Canada 0.819 6.255 8.146 0.638 4.581 7.172<br />

Emerging Markets 0.374 7.264 0.886 0.399 4.894 1.822<br />

Buy Sell α t(α) β t(β)<br />

United States 3.494 -0.965 0.430 2.518 0.455 5.013<br />

European Union 2.712 -0.740 0.418 2.244 0.398 3.854<br />

Germany 2.735 -0.407 0.475 2.250 0.254 4.208<br />

France 1.612 -0.171 0.258 1.296 0.554 5.506<br />

Canada 1.699 0.937 0.057 0.201 0.531 4.530<br />

Emerging Markets 0.689 0.372 0.059 0.216 0.455 3.974<br />

Table 3: Monthly Returns Trading Strategy<br />

4 Conclusion<br />

This paper contributes to the literature on oil and stock price co-movements. The impact of oil price changes is generally significant but varies across indices; I observed a strong relationship for developed countries. Studies typically end at this stage of the analysis with a statement about the significance, strength and direction of the relationship between the commodity and stock time series. This paper extends the literature by illustrating how to exploit the obtained information in trading strategies. The trading rule based on a simple bivariate VAR(p) for forecasting future stock returns significantly outperforms the buy-and-hold strategy in terms of expected return and risk. The systematic risk (beta) of the trading strategy is smaller than one, and the strategy yields large Sharpe ratios. The standard deviation of the strategy is significantly lower than the standard deviation of the market return, a fact which supports the favorable properties of the strategy in comparison to the buy-and-hold policy. The oil-based trading technique additionally provides a significant positive Jensen's alpha, as well as significant positive returns in the periods after a buy signal for all indices. The emerging markets index does not seem to react to changes in the oil price and is not economically exploitable with a VAR-based trading rule. The results are robust to variation in the out-of-sample and in-sample periods. Additionally, the robustness of the trading technique's construction has been verified, a necessary step usually omitted in the literature. In addition to the obvious implications for market participants, the results contribute to the discussion on market efficiency. From the EMH point of view, these results indicate that developed stock markets are not efficient (Fama, 1970). Because publicly available information from other markets was employed, the semi-strong market efficiency hypothesis (Jensen, 1978) can be rejected.<br />

5 References<br />

Basher, S. A. & Sadorsky P. (2006). Oil price risk and emerging stock markets. Global Finance Journal, 17(2), 224-<br />

251.<br />

Driesprong, G., Jacobsen, B. & Maat, B. (2008). Striking oil: Another puzzle? Journal of Financial Economics,<br />

89(2), 307-327.<br />

Fama, E. (1970). Efficient capital markets: a review of theory and empirical work. Journal of Finance, 25(2), 383-<br />

417.<br />

246


Geman, H. & Kharoubi C., (2008). WTI crude oil futures in portfolio diversification: The time-to-maturity effect.<br />

Journal of Banking & Finance, 32(12), 2553-2559.<br />

Hong, H., Torous, W. & Valkanov, R., (2007). Do industries lead stock markets? Journal of Financial Economics,<br />

83(2), 367-396.<br />

Jensen, M. (1978). Some anomalous evidence regarding market efficiency. Journal of Financial Economics, 6(2/3), 95-101.<br />

Jones, C. & Kaul G. (1996). Oil and the stock markets. Journal of Finance, 51(2), 463-491.<br />

Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Berlin: Springer Verlag.<br />

Nandha, M. & Faff R. (2008). Does oil move equity prices? a global view. Energy Economics, 30(3), 986-997.<br />

Newey, W. K. & West K. D. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation<br />

consistent covariance matrix. Econometrica, 55(1), 703-708.<br />

Park, C.-H. & Irwin S. H. (2007). What do we know about the profitability of technical analysis? Journal of<br />

Economic Surveys, 21(4), 786-826.<br />

Park, J. & R. Ratti (2008). Oil price shocks and stock markets in the U.S. and 13 European countries. Energy<br />

Economics, 30(5), 2587-2608.<br />

Sadorsky, P. (1999). Oil price shocks and stock market activity. Energy Economics, 21(5), 449-469.<br />
Sadorsky, P. (2001). Risk factors in stock returns of Canadian oil and gas companies. Energy Economics, 23(1), 17-<br />
28.<br />
Sims, C. A. (1980). Macroeconomics and reality. Econometrica, 48(1), 1-48.<br />



OPTIMAL LEVERAGE AND STOP LOSS POLICIES FOR FUTURES INVESTMENTS<br />

Rainer A. Schüssler, Westphalian Wilhelms-University of Münster, Germany<br />

Email: 05rasc@wiwi.uni-muenster.de<br />

Abstract. Our paper presents a framework to assess the portfolio-wide effects of simultaneously applying stop loss limits and<br />

leverage to dynamic mechanical investment strategies in futures markets. We systematically separate the management of portfolio-wide<br />
risk from the risk assumed for individual assets. We show the steps needed to formulate and solve the resulting finite-horizon<br />

Markov decision problem. The impact of optimally interfacing stop loss rules and leverage is explored using daily data from 1995 to<br />

2010 across 34 different commodity futures markets. Empirical results indicate advantageous wealth distributions for different futures<br />

investment strategies and risk preferences.<br />

Keywords: Risk Management, Dynamic Stochastic Programming, Commodity Futures Trading, Stop loss, Leverage<br />

JEL classification: C6, G11<br />

1 Introduction<br />

We introduce a general framework to assess the portfolio-wide effects of simultaneously applying stop loss limits<br />

and leverage to mechanical dynamic investment strategies in futures markets. We systematically separate the<br />

management of portfolio-wide risk and the risk for individual assets. We show how to formulate the investment<br />

problem as a finite horizon Markov decision problem. The key concept underlying our analysis is the decoupling of<br />

risk management on the portfolio level and the risk taken for individual positions. We consider the underlying<br />

investment strategy as entirely exogenous, that is, portfolio constituents, their direction of exposure (long or short)<br />

and their respective portfolio weights are determined by a mechanical dynamic selection rule for portfolio<br />
allocation. Our objective is to provide a method to systematically identify the optimal implementation<br />
for a given strategy.<br />

We apply the method to investment strategies in commodity futures markets for the time from 1995 until 2010.<br />

There are several reasons why we chose commodity futures: first, recent empirical studies in this field show that<br />

active investment strategies for commodity futures based on price-based signals would have been rewarding in the<br />

past. Price-based signals, such as momentum of returns and the shape of the term structure, appear to be able to<br />

detect time-varying risk premia in commodity futures markets (Gorton & Rouwenhorst, 2007; Basu & Miffre,<br />

2009). Secondly, commodity futures markets are an excellent platform for active investment strategies for several<br />

reasons: commodity futures markets are subject to rather small transaction costs ranging from 0.0004% to 0.033%<br />

(Locke & Venkantesh, 1997). Besides, short positions can be taken in commodity futures as straightforwardly as<br />

long positions. Further, commodity futures markets are highly liquid, at least for the nearby and second nearby<br />

contract, and offer the possibility to enhance returns by high leverage levels. The most important reason to apply the<br />

framework to commodity futures data is the set of stylized facts of commodity futures prices: one stylized fact of<br />

commodity futures prices is their trending behavior characterized by sharp spikes for many markets. This finding<br />

gives rise to the conjecture that there is some time-series dependence for the returns of individual commodity<br />

futures, which can be exploited by applying stop loss rules to individual futures investments. The low correlations<br />

between returns of different (sectors of) commodity futures are a further stylized fact. Therefore, a large amount of<br />
idiosyncratic risk can be diversified away and risk is reduced on the portfolio level.<br />

We suggest exploiting temporary trends of individual futures by increasing leverage. At the same time, portfolio<br />

risk is limited by dynamically optimizing the superposition policy (i.e. the joint decision about stop loss and<br />

leverage level) that is to be applied to individual futures for the next period. The choice of the superposition strategy<br />

depends on the investor’s utility function, current wealth, remaining periods until planning horizon, and the expected<br />

return distribution for available investment opportunities. Our approach differs from the conventional use of stop<br />

loss policies. Stop losses are usually applied to determine when to exit a risky investment in favor of a riskless<br />

alternative investment. In our setting, we also use stop loss limits to determine conditions to close individual<br />

investment positions. In contrast to common use, we do not employ stop loss rules to determine when the overall<br />

strategy is to be replaced by a riskless investment. Stop loss rules for an overall investment strategy are expected to<br />



be rewarding only if there is substantial time-series dependence present in the overall strategy returns. However, for<br />

many settings, it is not obvious why past performance of an overall strategy should indicate future performance,<br />

particularly in the case of low data frequencies. Rather, we treat risk management for futures investment strategies<br />

on an overall level analogously to a dynamic asset allocation problem; the difference lies in the<br />
decision variables: in a (basic) dynamic asset allocation problem, an<br />
investor decides on the weights of risky assets and a riskless asset. In contrast, for risk management, the decision variables<br />

are superposition policies that are applied to the basic investment strategies and thus generate new return profiles. In<br />

our empirical application, we will explore if our approach succeeds in improving an investor’s utility by applying<br />

superposition rules to basic investment strategies identified by mechanical trading rules.<br />

The rest of the paper is organized as follows. Section 2 establishes the general framework for analyzing the<br />

portfolio-wide impact of our superposition policies on mechanical investment strategies in futures markets. Section<br />
3 provides empirical results for the framework applied to investment strategies in commodity futures markets.<br />

Section 4 concludes.<br />

2 General framework<br />

2.1 Selection rules for basic futures investment strategies<br />

Consider a futures investment opportunity set $\mathcal{S} := \{1,\dots,S\}$. A generic mechanical trading rule that maps<br />
signals $\varphi$ for the $S$ futures into a vector of portfolio weights, $g^{*}\colon \varphi \mapsto w$, is applied at equally spaced points in<br />
time belonging to a finite set $\mathcal{T} := \{0,\dots,T-1\}$. Decisions are indexed by the set $\mathcal{J} := \{1,\dots,J\}$. That is, the $j$th<br />
decision, $j \in \mathcal{J}$, is made at time $t_{j} \in \mathcal{T}$. The length of each trading period is $\Delta t$. At each review period $t \in \mathcal{T}$,<br />
a mechanical rule $g^{*}(\varphi_{t})$ identifies the market exposure (long, neutral or short) of the futures of the opportunity<br />
set $\mathcal{S}$. The portfolio constituents are chosen along with the vector of respective portfolio weights. $g^{*}(\varphi_{t})$<br />
exploits a vector of mechanically generated signals $\varphi$ based on information available up to period $t$, that is, the<br />
information set $\Omega_{t}$. The portfolio is held until the next review period for trading, $t_{j+1} = t_{j} + \Delta t$. Applying the<br />
dynamic decision rule, the resulting set of futures positions is denoted basic strategy (BS). Long and short positions<br />
are denominated active positions and are indexed by the set $\mathcal{M} := \{1,\dots,M\} \subseteq \mathcal{S}$. $\Delta t$ is partitioned into a set of<br />
$D$ equally spaced points in time $\mathcal{D} := \{1,\dots,D\}$. The step size between two subsequent points in time is $\delta$.<br />
Without loss of generality, assume that $t$ represents months and $\delta$ denotes days. Hence, one month is partitioned<br />
into $\Delta t = \delta_{1} + \delta_{2} + \dots + \delta_{D}$. Before we proceed with a description of the considered stop loss and leverage policies,<br />
some assumptions are necessary.<br />

Assumption 1<br />
Rule $g^{*}(\varphi_{t})$ is strictly mechanical, that is, no discretionary decisions are taken and thus the possibility of<br />
backtesting strategies that are produced applying $g^{*}(\varphi)$ is ensured.<br />
Assumption 2<br />
The returns on the basic strategy, generated by application of $g^{*}(\varphi_{t})$, are i.i.d. at monthly data frequency.<br />

Assumption 3<br />

Each investment opportunity can be bought or sold at the designated amount, i.e., there are no divisibility problems.<br />
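As an illustration, a mechanical selection rule such as strategy (a) High of the empirical application can be sketched in a few lines. This is a hypothetical sketch, not the authors' code; the market names, the signal values, and the use of roll yield as the term-structure signal are all assumptions:<br />

```python
# Hypothetical sketch of a mechanical selection rule g*: signals -> weights.
# `roll_yields` maps each futures market to an assumed term-structure signal.

def select_high_strategy(roll_yields: dict[str, float]) -> dict[str, float]:
    """Go long, equally weighted, the third of markets with the most
    backwardated term structure (highest roll yield)."""
    ranked = sorted(roll_yields, key=roll_yields.get, reverse=True)
    top = ranked[: max(1, len(ranked) // 3)]
    weight = 1.0 / len(top)
    return {market: weight for market in top}

weights = select_high_strategy(
    {"WTI": 0.04, "Gold": -0.01, "Corn": 0.02, "Copper": 0.01,
     "Sugar": -0.03, "Cotton": 0.00}
)
print(weights)  # the two most backwardated of six markets, 0.5 each
```

Any such rule is strictly mechanical in the sense of Assumption 1: given the same signal vector, it always produces the same weights, so it can be backtested.<br />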



2.2 Superposition policies<br />

2.2.1 Stop loss policies<br />

We restrict the set of stop loss rules to trailing stops that automatically adjust upwards and trigger after cumulative<br />
losses exceed a critical value. The rule for creating the set $\Gamma := \{\gamma_{\min},\dots,\gamma_{\max}\}$ proceeds as follows: close<br />
position $m$ at time $\delta+1$ for the remaining time of the current trading interval $[\delta+1, D]$ if the cumulated<br />
performance for position $m$ falls short of a fixed threshold $\gamma$ relative to the maximum within the period $[t, \delta]$<br />
since inception of investment position $m$ at the previous review period $t$. Thus, $\Gamma$ is a set of rules for the entire<br />
range of considered stop loss limits, $\gamma \in [\gamma_{\min}, \gamma_{\max}]$.<br />

2.2.2 Leverage policies<br />

The leverage level is a matter of how much risk an investor wants to take. Futures trading requires a relatively small<br />
amount of margin, and an investor is not very constrained by the amount of initial capital committed to trading. If the<br />
leverage level is 1, contracts are fully collateralized and the investor takes an unlevered position, e.g., a futures<br />
contract with value $100,000 is backed by a $100,000 cash deposit. For a leverage level of 2, futures contracts are<br />
collateralized with only half the futures contract value. And a leverage of 0.5 means that futures contracts are<br />
collateralized with twice the underlying contract value. We consider the discrete set $L := \{l_{\min},\dots,l_{\max}\}$ of<br />
leverage levels.<br />

2.3 Dynamic programming formulation<br />

2.3.1 Action space<br />

The stop loss policy and the choice of leverage have to be determined simultaneously. After applying a<br />
superposition policy, i.e. a combination of stop loss limit and leverage level, to the basic strategy, the resulting<br />
strategy is labeled portfolio strategy. An action set $\mathcal{A} := \{1,\dots,A\}$ for superposition policies that can be applied to<br />
the basic strategy is generated as the Cartesian product of the sets $\Gamma$ and $L$, that is, $\Gamma \times L := \{(\gamma, l) \mid \gamma \in \Gamma \wedge l \in L\}$. In<br />
addition, a set of alternative investments, $AI := \{rf_{ret}\}$, with the single element $rf_{ret}$ as risk-free investment<br />
opportunity with constant guaranteed return, is considered. Thus, $\mathcal{A}$ consists of $|\Gamma| \cdot |L| + 1$ elements. To keep<br />
computational complexity at a manageable level we have to restrict the number of feasible superposition strategies.<br />
Therefore, the feasible action space $\mathcal{A}^{F}$, with $\mathcal{A}^{F} \subseteq \mathcal{A}$, is equipped with a subset of $\mathcal{A}$. Pruning $\mathcal{A}$ to $\mathcal{A}^{F}$, both<br />
the number of feasible actions and the feasible actions themselves have to be determined. While the number is<br />
mainly an issue of computer capacity, the actions that are chosen to be decision variables within the dynamic<br />
program have to be identified according to problem-specific economic criteria. This issue will be discussed in<br />
more detail in Section 3. Backtesting the (hypothetical) returns of the whole set of portfolio strategies, with data<br />
available up to the current period, provides decision support for the choice of $\mathcal{A}^{F}$. Returns of the portfolio strategy<br />
result from applying (feasible) superposition policies to the basic strategy (BS),<br />
$ret^{SP} := ret^{BS} \circ (\Gamma^{F} \times L^{F})$, with $\Gamma^{F} \times L^{F} := \{(\gamma, l) \mid \gamma \in \Gamma^{F} \wedge l \in L^{F}\}$.<br />
Hence, the feasible action set comprises $\mathcal{A}^{F} := ret^{SP} \cup AI$.<br />
All referred returns are defined as discrete returns. For commodity futures markets we refer to discrete excess<br />
returns.<br />
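The construction of the action set can be sketched directly; the concrete grid of stop limits and leverage levels below anticipates the parameterization chosen in the empirical application, and the tuple/string encoding of actions is our own assumption:<br />

```python
# Sketch: build the action set A as the Cartesian product of stop loss
# limits (Gamma) and leverage levels (L), plus the risk-free alternative.
from itertools import product

stops = [round(0.01 * k, 2) for k in range(21)]  # gamma in {0.00, ..., 0.20}
leverages = [0.5, 1, 2, 3]                       # the set L
actions = list(product(stops, leverages))        # superposition policies
actions.append("risk_free")                      # alternative investment AI

print(len(actions))  # |Gamma| * |L| + 1 = 21 * 4 + 1 = 85
```

The feasible subset used for the dynamic program is then a hand-picked selection from this list, as discussed in Section 3.<br />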

2.3.2 State space<br />

Wealth $W$ constitutes the single state variable in the problem formulation. Thus, an adequate description for the<br />
stochastic evolution of $\{W_{t}, t \geq 0\}$ is to be identified. The transition equation for wealth is represented as<br />
$W_{t+1} = g(W_{t}, a_{t}^{F}, \varepsilon_{t+1})$ in the most general notation. Hence, the wealth in the following period is a function of<br />



current wealth $W_{t}$, the action $a_{t}^{F} \in \mathcal{A}^{F}$, and $\varepsilon_{t+1}$ representing randomness. More concretely, wealth evolves<br />
according to $W_{t+1} = W_{t}\,(1 + ret_{t+1}^{SP}(a_{t}^{F}, \varepsilon_{t+1}))$. The returns of the portfolio strategy for the next period,<br />
$ret_{t+1}^{SP}$, depend on $a_{t}^{F}$, that is, the simultaneous choice of stop loss rule and leverage level. Of course, these<br />
returns are random. To be more precise about $\varepsilon$, we resort to a resampling scheme to simulate the data generating<br />
process for $ret^{SP}$. The procedure is applied in order to avoid errors due to possible misspecifications using<br />
parametric estimations for the stochastic process for $ret^{SP}$. It is crucially important to notice that the resampled<br />
monthly historical returns are (value-weighted) overall portfolio returns. Therefore, it does not matter which<br />
commodity futures had been chosen by the mechanical selection rule $g^{*}(\varphi)$ at any review period. Instead, the<br />
individual portfolio constituents are regarded as entirely interchangeable and returns of the portfolio strategy are<br />
defined as the outcome of applying the mechanical trading rule $g^{*}(\varphi_{t})$. Hence, the consequences of<br />
Assumption 1 and Assumption 2 become evident: returns generated applying $g^{*}(\varphi_{t})$ have to be i.i.d. at a<br />
monthly data frequency to carry out the analysis with wealth as single state variable because, for this case, there is<br />
no time-series dependence of returns on the portfolio level. The portfolio-level view constitutes an enormous<br />
simplification for our analysis because time series characteristics for returns of individual futures (and correlations<br />
with other futures) become irrelevant.<br />
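A toy version of this wealth transition with i.i.d. resampled monthly returns might look as follows; the return pool is invented for illustration and merely stands in for the paper's resampling scheme:<br />

```python
# Toy wealth transition W_{t+1} = W_t * (1 + ret_SP), with the next-month
# portfolio-strategy return drawn i.i.d. from a pool of (hypothetical)
# historical monthly portfolio returns, as Assumption 2 permits.
import random

def simulate_wealth(w0: float, monthly_pool: list[float],
                    horizon: int, rng: random.Random) -> float:
    wealth = w0
    for _ in range(horizon):
        ret = rng.choice(monthly_pool)  # resample one monthly return
        wealth *= 1.0 + ret             # state transition for wealth
    return wealth

pool = [0.03, -0.02, 0.05, -0.04, 0.01]  # invented monthly returns
final = simulate_wealth(100_000.0, pool, horizon=12, rng=random.Random(0))
print(round(final, 2))
```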

2.3.3 Value Function<br />

The terminal value function at the end of the planning horizon $T$ is stated as<br />
$$V_{T}(W_{T}) := \begin{cases} \dfrac{W_{T}^{1-\eta}}{1-\eta}, & W_{T} \geq \text{protection\_level} \\[1ex] \dfrac{W_{T}^{1-\eta}}{1-\eta} - \lambda\,(\text{protection\_level} - W_{T}), & W_{T} < \text{protection\_level}. \end{cases}$$<br />
The perceived risk by the investor is directly linked to wealth. Above a specified protection level, the terminal value<br />
function is of the CRRA type and the parameter $\eta \geq 0$ determines the level of risk aversion. Below the protection<br />
level, missing the target is penalized by the linear term $\lambda\,(\text{protection\_level} - W_{T})$, with $\lambda \geq 0$, to control for<br />
downside risk. $\eta$ and $\lambda$ have to be chosen such that, for all levels of wealth,<br />
$\dfrac{W_{T}^{1-\eta}}{1-\eta} \geq \dfrac{W_{T}^{1-\eta}}{1-\eta} - \lambda\,(\text{protection\_level} - W_{T})$.<br />
Our modeling choice to account for downside risk is an alternative formulation of a chance-constrained investment<br />
problem: the capital should be invested so that a minimum return is exceeded with a probability to be specified, say 95% or 99%, and the expected<br />
return should be as large as possible. We will explore empirically whether downside risk protection can be achieved<br />
by utility maximization using the linear penalty term for a given value for $\lambda$.<br />
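The terminal value function translates directly into code. A minimal sketch, using the parameter values $\eta = 0.01$ and $\lambda = 100$ that appear later in the empirical application:<br />

```python
# Sketch of the terminal value function: CRRA utility above the protection
# level, CRRA minus a linear shortfall penalty below it.

def terminal_value(wealth: float, protection: float,
                   eta: float = 0.01, lam: float = 100.0) -> float:
    crra = wealth ** (1.0 - eta) / (1.0 - eta)
    if wealth >= protection:
        return crra
    return crra - lam * (protection - wealth)  # penalize missing the target

# Falling below the protection level is heavily penalized:
print(terminal_value(100_000.0, 95_000.0) > terminal_value(90_000.0, 95_000.0))  # True
```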

2.3.4 Backward recursion<br />
For periods $t \in \mathcal{T}$, the value function can be stated according to the Bellman equation:<br />
$$V_{t}(W_{t}) = \max_{a_{t}^{F} \in \mathcal{A}^{F}} \left\{ f(W_{t}, a_{t}^{F}) + E_{t}\!\left[ V_{t+1}(W_{t+1}) \mid W_{t} \right] \right\}.$$<br />

The dynamic optimization problem can be solved using backward dynamic recursion, conditioning on the level of<br />
wealth. According to the specified state transition for wealth, we obtain for $t = T-1, \dots, 1$:<br />
$$V_{t}(W_{t}) = \max_{a_{t}^{F} \in \mathcal{A}^{F}} \left\{ f(W_{t}, a_{t}^{F}) + E_{t}\!\left[ V_{t+1}\!\left(W_{t}\,(1 + ret_{t+1}^{SP}(a_{t}^{F}, \varepsilon_{t+1}))\right) \mid W_{t} \right] \right\}.$$<br />

For each $t \in \mathcal{T}$, two different sets of monthly returns, $ret^{SP*}$, are resampled: one in-sample set $S_{t}^{in}$ of size<br />
$|S_{t}^{in}|$ and one out-of-sample set $S_{t}^{out}$ of size $|S_{t}^{out}|$, with $\varepsilon^{in} \in S_{t}^{in}$ and $\varepsilon^{out} \in S_{t}^{out}$. Both the in-sample and<br />
the out-of-sample sets are generated on the basis of the same historical daily portfolio returns. Thus, in-sample and<br />
out-of-sample refer to different (due to resampling) monthly returns from the same pool of daily returns. The<br />
sample $S_{t}^{in}$ includes the in-sample returns used for generating the single-period utility maximization problems and<br />
the sample $S_{t}^{out}$ entails the monthly out-of-sample returns employed for evaluating the obtained solution. Using two<br />
different samples, one for optimizing and the other for evaluating, prevents optimization bias. In the following,<br />
$\hat{V}_{t}^{*}(\cdot)$ refers to the in-sample estimate, whereas $\hat{V}_{t}^{**}(\cdot)$ refers to the out-of-sample estimate of the value function.<br />

Starting at period $T-1$, wealth is parameterized into $I$ discrete wealth levels $W_{T-1}^{i}$. We solve the problem in<br />
period $T-1$ for each level of wealth, i.e. for each specified grid point, using sample $S_{T-1}^{in}$, and obtain the optimal<br />
superposition policies, $a_{T-1}^{F*}$. The optimal superposition policy tells us which combination of leverage level and stop<br />
loss of the feasible action space maximizes the expected utility of the next month. The obtained solutions are<br />
evaluated using Monte Carlo samples with the out-of-sample returns to estimate the expected utility of the single-<br />
period utility maximization problem of each period. For each level of wealth, $W_{T-1}^{i}$, a corresponding value for the<br />
out-of-sample value function, $\hat{V}_{T-1}^{**}(W_{T-1}^{i})$, is obtained. The value function of period $T-1$ is the induced value<br />
function for the $T-2$ single-period optimization, and the procedure is repeated until all optimizations in period 1<br />
are done. Finally, in period 0, the initial wealth is known and the final optimization using the period-1 value<br />
function as implied utility function is conducted. As a final result, we receive a sequence of optimal policies<br />
depending on each possible state for each time period (and level of wealth), $\pi^{*} = \{a_{0}^{*,F,i}(W_{0}^{i}),\dots,a_{T-1}^{*,F,i}(W_{T-1}^{i})\}$. In<br />
each period of the backward recursion, a different independent sample of large size is used for evaluation to avoid<br />
optimization bias. Depending on the sample size, the in-sample estimate $\hat{V}_{t}^{*}(\cdot)$ would suffer from an optimization<br />
bias that would be carried forward between stages, whereas the out-of-sample estimate $\hat{V}^{**}(\cdot)$ of the portfolio<br />
decision represents an independent evaluation without any optimization bias. Thus, the sequential optimization-evaluation<br />
procedure is an important part of the solution algorithm. We simulate a large number of paths of the<br />
controlled state variable (wealth) to evaluate out-of-sample paths for the evolution of wealth from the inception of<br />
the investment in period 0 until the end of the planning horizon in period $T$. In the following section, we apply our<br />
method to investment strategies in commodity futures markets and discuss the results.<br />
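To make the recursion concrete, the following sketch runs a drastically simplified backward induction on a wealth grid. The two actions and their return samples are invented; grid lookup rounds up to the next grid point and clips at the top, which also reproduces the conservative bias near the wealth cap discussed in Section 3.3:<br />

```python
# Simplified backward recursion on a wealth grid, in the spirit of 2.3.4.
import bisect

# Wealth grid as in the empirical application: $0..$400,000 in $250 steps.
GRID = [250.0 * i for i in range(1601)]

# Hypothetical per-action samples of next-month returns (invented values).
ACTIONS = {
    "SP7": [0.04, -0.03, 0.06, -0.05],         # leverage 1, no stop
    "SP13": [0.0025, 0.0025, 0.0025, 0.0025],  # risk-free, ~3% p.a.
}

def lookup(values, wealth):
    """Grid lookup for V(wealth); wealth above the grid top is clipped."""
    i = min(bisect.bisect_left(GRID, wealth), len(GRID) - 1)
    return values[i]

def expected_value(values, wealth, action):
    rets = ACTIONS[action]
    return sum(lookup(values, wealth * (1.0 + r)) for r in rets) / len(rets)

def backward_recursion(terminal, horizon):
    v = [terminal(w) for w in GRID]            # V_T on the grid
    policies = []
    for _ in range(horizon):                   # t = T-1, ..., 0
        policy = [max(ACTIONS, key=lambda a, w=w: expected_value(v, w, a))
                  for w in GRID]
        v = [expected_value(v, w, policy[i]) for i, w in enumerate(GRID)]
        policies.insert(0, policy)
    return v, policies

v0, policies = backward_recursion(lambda w: w, horizon=2)  # risk-neutral toy
print(policies[0][400], policies[0][1600])  # SP7 at $100,000, SP13 at the cap
```

Even in this toy setting, the clipped grid makes the risk-free action optimal near the wealth cap, which is exactly why optimization results become unreliable in that region.<br />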

3 Empirical application<br />

3.1 Mechanical selection rules for commodity futures<br />

Gorton & Rouwenhorst (2006) introduce an active commodity futures strategy that seeks to exploit the term<br />

structure of commodity futures prices by taking long positions in backwardated contracts and short positions in<br />

contangoed ones. They exploit the term-structure signals of 12 commodities and implement a simple long-short<br />

strategy that buys the 6 most backwardated commodities and shorts the 6 most contangoed commodities. Miffre &<br />

Rallis (2007) employ momentum signals and tactically allocate wealth towards the best performing commodities<br />

and away from the worst performing ones. At the end of each month, they sort futures contracts into quintiles based<br />

on their average return over the previous R months (ranking period). The futures contracts in each quintile are<br />

equally weighted. The performance of both the top (winner) and bottom (loser) quintiles is monitored over the<br />

subsequent H months (holding period). The resulting R-H momentum strategy buys the winner portfolio, shorts the<br />

loser portfolio and holds the long-short position for H months earning considerable excess returns. Shen, Szakmary<br />

& Sharma (2010) provide further empirical evidence for abnormal returns of momentum strategies in commodity<br />



futures markets for intermediate holding periods. Fuertes, Miffre & Rallis (2010) provide an empirical study<br />

combining both momentum and term structure signals. They proceed as follows: first, they compute the degree of<br />

backwardation and contango at the end of each month and the 1/3 breakpoints to split the cross section of futures<br />

contracts into 3 portfolios, labeled low, medium and high. They then sort the commodities in the high portfolio into<br />

2 sub-portfolios (high-winner and high-loser) based on the mean return of the commodities over the past R months.<br />

In effect, the high-winner and high-loser portfolios contain 50% of the cross-section that is selected with the first<br />

term-structure sort, or 50% × 33.3%<br />
of the initial cross-section that is available at the end of a given month.<br />

Intuitively, high-winner is thus made of the commodities that have both the highest degree of backwardation at the<br />

time of portfolio construction and the best past performance. Similarly, they sort the commodities in the low<br />

portfolio into 2 sub-portfolios (low-winner and low-loser) based on their mean return over the past R months. Lowloser<br />

contains therefore commodities that have both the highest contango at the time of portfolio construction and<br />

the worst past performance. The combined strategy buys the high-winner portfolio, shorts the low-loser portfolio<br />

and holds this position for one month. The choice of one-month holding period (H=1) and monthly rebalancing is<br />

due to the fact that the momentum strategies with H=1 and R=1 proved to be the most profitable ones in Miffre &<br />

Rallis (2007). This finding is supported by the empirical study of Shen, Szakmary & Sharma (2007). The double-sort<br />
strategies could generate returns (net of transaction costs) clearly above the passive benchmark and also<br />

performed much better risk-adjusted. The robustness analysis of Shen et al. (2007) suggests that the superior profits<br />

of the double-sort strategies are not an artifact of lack of liquidity, transaction costs or data mining. Also, the success<br />

of their approach proves to be robust to alternative choices for ranking and holding periods and different<br />

specifications of the risk-return relationship. Despite these encouraging results, the returns of the active strategy<br />

turned out to be substantially more volatile and exhibited more downside risk. Bottom line, the term structure of<br />

commodity prices and momentum signals seem to have been valuable tools for allocation across individual<br />

commodity futures in the past.<br />

We consider four basic mechanical investment strategies based on term structure or momentum signals. For all<br />

strategies we apply one month as lookback period and one month as holding period. The choice of a one month<br />

investment period and a one month lookback period is arbitrary, as are all investment and lookback periods for<br />

momentum strategies. Strategy (a) High goes long one third of all current portfolio constituents whose term structure<br />

is most backwardated. Strategies (b)-(d) also select long or short positions for one third of the number of current<br />

portfolio constituents. Strategy (b) High-Winner double-sorts commodity futures according to the degree of<br />

backwardation and momentum and takes long positions for the chosen commodity futures. Strategy (c) Winner is<br />

only based on a momentum signal and takes long positions in the futures with the highest futures return in the<br />

previous month. Strategy (d) High-Winner-Low-Loser goes long the High-Winner futures in the same manner (as in<br />
strategy (b), but only half of them); the other part consists of shorted Low-Losers, which are ranked on the highest contango and<br />
lowest previous returns. Because they are expected to underperform, they are shorted. For each strategy, all<br />
included commodity futures are weighted equally.<br />

3.2 Parameterization<br />

To determine the action space $\mathcal{A}$ we have to define a suitable range of stop loss limits and leverage levels<br />
which are to be applied to the returns of the basic strategies (a)-(d) to obtain the portfolio strategy. The set of stop<br />
loss limits $\Gamma := \{\gamma_{\min},\dots,\gamma_{\max}\}$ is scanned for the range between $\gamma_{\min} = 0$ and $\gamma_{\max} = 0.20$. For the extreme<br />
case of $\gamma_{\min} = 0$, investment position $m$ is closed at the next trading day for the remaining time of the trading period,<br />
that is, the end of the month in our setting, if the performance index (starting at value 1 at the first day of the month) falls<br />
short of 0% in reference to the maximum value of the performance index since the beginning of the month. Thus, a stop<br />
loss threshold $\gamma_{\min} = 0$ triggers closing of position $m$ if the performance index does not outperform the maximum<br />
reached so far in the current month. Using the stop loss limit $\gamma_{\max} = 0.20$ at the other end of the spectrum accepts a<br />
20% maximum drawdown before liquidation of the position is triggered. Note that the suffered loss can exceed the<br />
specified stop loss limit $\gamma$. Consider, for example, $\gamma = 10\%$, and the performance index is 1.01 at trading day $\delta$, while<br />
the maximum value of the index equals 1.10 for this month. Thus, the position is not closed at the following trading<br />
day $\delta+1$ because $1.01/1.10 - 1 = -0.0818 > -0.10$. At the next trading day, $\delta+2$, the index drops to 0.96. At this point,<br />
closing the future is triggered, but the investor has already suffered $0.96/1.10 - 1 = -0.1273 < -0.10$.<br />


We scan the grid with step width 0.01 from $\gamma_{\min} = 0$ to $\gamma_{\max} = 0.20$, given a certain leverage level. For the set of<br />
attainable leverage levels, $l \in \{0.5, 1, 2, 3\}$ is considered. The action set $\mathcal{A}$ thus comprises $|\Gamma| \times |L| + 1$ possible actions,<br />
that is, $|\Gamma| \times |L|$ superposition policies and the risk-free investment opportunity. To keep computation feasible, the<br />
actions considered so far are divided into feasible and infeasible actions. This procedure is done for each possible<br />
level of wealth. To proceed, we backtest the suggested superposition policies with strategies (a)-(d),<br />
and backtest results are used to preselect potential superposition policies. Table 1 shows the superposition policies that are<br />
chosen to be inputs, that is, decision variables, for the dynamic programming setup. Thus, the feasible action space<br />
$\mathcal{A}^{F}$ consists of 13 elements, presenting a computationally tractable dimension.<br />

SP1: Leverage: 3, Stop: not applied<br />
SP2: Leverage: 3, Stop: 12.5%<br />
SP3: Leverage: 3, Stop: 7.5%<br />
SP4: Leverage: 2, Stop: not applied<br />
SP5: Leverage: 2, Stop: 7.5%<br />
SP6: Leverage: 2, Stop: 5.0%<br />
SP7: Leverage: 1, Stop: not applied<br />
SP8: Leverage: 1, Stop: 5.0%<br />
SP9: Leverage: 1, Stop: 2.5%<br />
SP10: Leverage: 0.5, Stop: not applied<br />
SP11: Leverage: 0.5, Stop: 5.0%<br />
SP12: Leverage: 0.5, Stop: 2.5%<br />

Table 1: The table shows the feasible combinations of stop loss limits and leverage levels. Note that, for simplicity, we do not select different<br />

feasible superposition strategies for each of the considered basic investment strategies. Instead, three different stop loss limits are chosen. Two<br />

stop loss limits are selected in the range for a high ratio of average annual geometric return to the average annual standard deviation of returns,<br />

the third alternative is that no stop loss mechanism is applied. These pre-selected superposition strategies serve as the feasible actions in the<br />

dynamic programming setting.<br />

3.3 Results<br />

We evaluate the four investment strategies proposed in the previous section with all associated superposition policies. The discrete-time finite-horizon MDP is solved for a planning horizon of T = 12 months and monthly review periods. Starting with an initial wealth W₀ = $100,000, we set the protection level at $95,000. We use a step size of $250 and an attainable range between $0 and $400,000; thus the number of grid points equals 1,601 and W ∈ {$0, ..., $400,000}. Such a dynamic programming framework requires a maximum level of wealth to be specified, and all levels of wealth exceeding it are assumed to have the same utility. This causes a conservative bias as that level is approached: since the starting capital W₀ is $100,000, optimization results become unreliable as wealth approaches $400,000, because exceeding this amount is of no additional use in the model. The problem is aggravated the more periods remain. We therefore have to be cautious when interpreting optimization results, depending on the starting capital and the time horizon, when the state of wealth lies above a certain region. However, a maximum value of $400,000 means wealth would have to quadruple within a year, so our choice is not very restrictive. An analogous problem arises when wealth approaches $0. We set the risk aversion parameters to λ = 0.01 for the upper part and λ = 100 for wealth levels below the protection level. To quantify how much upside potential we give up in favor of the downside protection, we contrast our results with the parameter combination λ = 0 and λ = 0 (i.e., an entirely risk-neutral investor). To obtain evidence of the added value of the superposition strategies, we compare the resulting distributions when all superposition policies are considered, that is a ∈ F_A = {1, ..., A} with A = 13. The base case only allows unleveraged positions without stop loss to be selected, i.e. only the controls a ∈ {7, 10, 13} are available. Superposition 7 is Leverage 1 and no stop loss, superposition 10 is Leverage 0.5 and no stop loss, and superposition 13 represents the risk-free rate with an annualized return of 3%. Tables 2-5 report the results for each of the four basic investment strategies after applying the superposition policies. 99% (95%) denotes the level of wealth that is exceeded in 99% (95%) of the 5,000 out-of-sample simulation runs.
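A sketch of the wealth grid and the kinked preferences around the protection level is given below. The piecewise-linear form of the utility and the way the two risk aversion parameters enter are assumptions for illustration; only the grid, the protection level and the parameter values are taken from the text:

```python
import numpy as np

# Wealth grid as described: step size $250 on [$0, $400,000],
# giving 400,000 / 250 + 1 = 1,601 grid points.
STEP, W_MAX = 250, 400_000
grid = np.arange(0, W_MAX + STEP, STEP)

# Hypothetical piecewise-linear preferences around the $95,000
# protection level: shortfalls below the level are penalized with
# lambda = 100, wealth above it is valued with lambda = 0.01, and
# wealth beyond the $400,000 cap adds no further utility (the exact
# functional form is an assumption, not taken from the paper).
PROTECT, LAM_UP, LAM_DOWN = 95_000, 0.01, 100.0

def utility(w):
    w = np.minimum(np.asarray(w, dtype=float), W_MAX)
    gain = np.maximum(w - PROTECT, 0.0)
    shortfall = np.maximum(PROTECT - w, 0.0)
    return LAM_UP * gain - LAM_DOWN * shortfall
```

The asymmetry (100 versus 0.01) is what makes the optimal policy sacrifice upside for downside protection in the tables that follow.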



High-Strategy

                                     99% Wealth   95% Wealth   Expected Wealth
Superposition (λ = 0, λ = 0)         $64,500      $80,500      $138,410
Superposition (λ = 100, λ = 0.01)    $94,750      $95,500      $119,950
Base Case (λ = 0, λ = 0)             $72,500      $81,500      $107,180
Base Case (λ = 100, λ = 0.01)        $96,250      $97,750      $103,640

Table 2: Out-of-sample simulation results for High-Strategy

High-Winner-Strategy

                                     99% Wealth   95% Wealth   Expected Wealth
Superposition (λ = 0, λ = 0)         $78,500      $92,750      $150,260
Superposition (λ = 100, λ = 0.01)    $94,250      $95,250      $136,440
Base Case (λ = 0, λ = 0)             $70,500      $81,250      $113,660
Base Case (λ = 100, λ = 0.01)        $95,250      $96,750      $106,790

Table 3: Out-of-sample simulation results for High-Winner-Strategy

Winner-Strategy

                                     99% Wealth   95% Wealth   Expected Wealth
Superposition (λ = 0, λ = 0)         $68,500      $82,500      $135,690
Superposition (λ = 100, λ = 0.01)    $94,750      $95,500      $121,510
Base Case (λ = 0, λ = 0)             $81,375      $89,250      $116,660
Base Case (λ = 100, λ = 0.01)        $94,750      $96,250      $110,640

Table 4: Out-of-sample simulation results for Winner-Strategy

High-Winner-Low-Loser-Strategy

                                     99% Wealth   95% Wealth   Expected Wealth
Superposition (λ = 0, λ = 0)         $70,250      $81,500      $117,190
Superposition (λ = 100, λ = 0.01)    $94,875      $96,500      $107,380
Base Case (λ = 0, λ = 0)             $75,000      $82,250      $106,690
Base Case (λ = 100, λ = 0.01)        $96,250      $98,250      $103,750

Table 5: Out-of-sample simulation results for High-Winner-Low-Loser-Strategy


The results show strong evidence in favor of the superposition policies. For each of the investment strategies, we observe a considerable increase in expected wealth, both for the risk-controlled case and for the risk-neutral case. As expected, we find that downside protection is not free: protecting the downside, we give up a fairly high portion of the upside potential.

4 Conclusion<br />

To the best of our knowledge, the impact of applying stop-loss rules and leverage to futures investments has not yet been subject to analysis. Our framework can be thought of as a kind of meta-strategy for determining optimal risk policies for futures investment strategies. Decisions are identified both for individual contracts and for portfolio-wide risk management. The suggested procedure is designed to capture temporary trends by employing leverage, to limit risk by stopping losses through the use of time-series dependence, and to diversify the idiosyncratic risk of individual futures contracts. Although our framework is rather simple, it allows for flexible utility functions and arbitrary return distributions. The portfolio-level view of risk management facilitates analysis and allows an arbitrarily large investment opportunity set to be considered. Empirical results for investment strategies in commodity futures markets are encouraging and show the potential value of the framework. In particular, the proposed framework limits downside risk. Extending the framework or applying more elaborate trading signals are promising areas of future research.

5 References<br />

Basu, D. & Miffre, J. (2009). Capturing the risk premium of commodity futures. Working paper, EDHEC Business School.

Erb, C. & Harvey, C. (2006). The strategic and tactical value of commodity futures. Financial Analysts Journal 62, 69-97.

Fuertes, A.-M., Miffre, J. & Rallis, G. (2010). Tactical allocation in commodity futures markets: Combining momentum and term structure signals. Journal of Banking and Finance 34, 2530-2548.

Gorton, G. & Rouwenhorst, K. (2006). Facts and fantasies about commodity futures. Financial Analysts Journal 62, 86-93.

Gorton, G., Hayashi, F. & Rouwenhorst, K.-G. (2007). The fundamentals of commodity futures returns. Yale ICF Working Paper No. 07-08.

Kaminski, K.M. & Lo, A.W. (2008). When do stop-loss rules stop losses? Working paper, Swedish Institute for Financial Research.

Locke, P. & Venkatesh, P. (1997). Futures market transaction costs. Journal of Futures Markets 17, 229-245.

Miffre, J. & Rallis, G. (2007). Momentum strategies in commodity futures markets. Journal of Banking and Finance 31, 1863-1886.

Pirrong, C. (2005). Momentum in futures markets. Working paper, University of Houston.

Schneeweis, T., Kazemi, H. & Spurgin, R. (2008). Momentum in asset returns: Are commodity returns a special case? The Journal of Alternative Investments 10, 23-46.

Shen, Q., Szakmary, A. & Sharma, S. (2007). An examination of momentum strategies in commodity futures markets. Journal of Futures Markets 27, 227-249.

Shen, Q., Szakmary, A. & Sharma, S. (2010). Trend-following trading strategies in commodity futures: A re-examination. Journal of Banking and Finance 34, 409-426.



THE IMPACT OF INTERNATIONAL MARKET EFFECTS AND PURE POLITICAL RISK ON THE UK,<br />

EMU AND USA OIL AND GAS STOCK MARKET SECTORS<br />

John Simpson, School of Economics and Finance, Curtin University, Perth, Western Australia.<br />

Email: simpsonj@cbs.curtin.edu.au www.cbs.curtin.edu.au<br />

Abstract. A key question in European integration is whether or not the global oil and gas market sector (over the period which<br />

includes the oil price hikes of 2001 to late 2007 and the global financial crisis from mid 2008 to early 2009) has been impacted to a<br />

greater or lesser extent by major Western country oil and gas stock market sectors. An examination is undertaken of data from those<br />

sectors in the USA, the EMU and the UK as well as political, social and legal data (embodied in political risk ratings) for the USA, the<br />

UK and the key economies of the EMU (that is, France, Germany and Italy). This short study uses analysis of both unlagged and<br />

lagged data in multivariate models to test these relationships and finds a closer relationship politically and financially between the<br />

USA and the UK than between the EMU and the UK markets. The EMU and the major EMU countries considered separately in oil<br />

and gas market sector movements and political risk ratings movements have very little financial and political influence on the<br />

movement of the global oil and gas market sector over the period.<br />

Keywords: oil and gas, market sectors, country risk, political risk, integration, USA, UK, EMU, multivariate models.<br />

JEL Classification: F15, F59<br />

1. Introduction<br />

According to theory and the Law of One Price as revisited by Asche et al. (2000), in an integrated market, prices of homogeneous goods from different producers and suppliers should move together. Price differentials should only reflect differences in transportation costs and quality. These price differentials for energy should flow through to oil and gas stock market sectors and be reflected in oil and gas share returns. They should also reflect the degree of oil and gas stock market integration of the major Western stock markets.

There is a connection between market integration and political risk in energy markets. There is also a relationship between energy prices, political risk and prices in energy stock market sectors. For example, Asche et al. (2000) find through cointegration tests that the different border prices for gas delivered to Germany moved proportionally over time, indicating integration of the German gas market. They also studied whether or not there were large price differences between gas from Norwegian, Dutch and Russian exporters. They find differences in mean prices (Russian gas was cheaper), and the price differences were ascribed to differences in volume flexibility and perceived political risk. The question in the case of the EMU countries is whether or not energy market integration in the leading EMU countries' energy sectors, together with political risk changes in those countries, has flowed over to global oil and gas stock market sector integration.

It is logical that oil and gas listed companies should be combined into one market sector. There is a proven strong connection between oil prices and gas prices, and this translates into a need to include energy companies in one sectoral stock price index. Most researchers agree that the price of oil has something to do with the price of gas (for example, Okogu, 2002; Mazighi, 2005; Eng, 2006). Some also agree that other energy prices, such as coal, are related to oil and gas prices (for example, Bachmeier, 2006; Pindyck, 1999). If this is the case in North America as well as in Continental Europe, then an interim global gas pricing model remains possible, with oil and coal prices partial forecasters of gas prices. This direction of energy markets should also be indicated in the degree of integration of the American and European oil and gas stock market sectors.

The issues in respect of the integration of global oil and gas stock market sectors are as follows. 1. Are the factors driving the major Western oil and gas stock market sectors over the period of study mainly political, or economic and financial (that is, stock market related), in nature? 2. Is the EMU oil and gas market a greater political and legal force impacting the main global oil and gas markets over the period of study than the UK? 3. Is the UK oil and gas stock market sector more economically and politically integrated with the US and global markets than it is with the EMU? In respect of the last issue it needs to be remembered that, whilst the UK is a member of the European Union, it has not fully committed to membership of the EMU.



A definition of political risk is required for this study. Political risk concerns the willingness of countries to service their external commitments (for example, Bourke & Shanmugam, 1990; Cantor & Packer, 1996). It is influenced by human, cultural, social and legal factors that provide a subjective quantification of influences such as the degree of corruption, the history of law and order and the quality of the bureaucracy (ICRG, 2009). It is suggested that political risk ratings in Western economies have become more volatile over the period since the "9/11" terrorist attacks, the Iraq war, the corporate governance issues in the USA related to, for example, WorldCom and Enron, and more recently the finance and banking governance issues that led to the global financial crisis in the USA, the UK and the major Western European countries (for an indication of the volatility in political risk ratings over the full period of the study, see Appendix 2).

Political risk ratings are used by financial economists as a management tool for assessing economic, financial and political riskiness in doing business with different countries, at either a macro- or a microeconomic level (for example, they assist banks lending internationally to ascribe credit risk premia in arriving at a market interest rate). Political risk ratings are one component of country or sovereign risk ratings; the other two components, economic risk and financial risk, measure a country's ability to meet external obligations.

Country stock market indices, sectoral or otherwise, have proven to be reliable indicators of economic and financial conditions. Major Western stock market sectors in oil and gas have also been quite volatile over the period from 2001 to 2008, for similar reasons relating to oil prices, US corporate governance issues and the global financial crisis. Oil and gas market data also reflect global economic and financial conditions and may be impacted by political risk factors, owing to the importance of oil and gas as an essential commodity and because the OPEC cartel is comprised of developing countries with high levels of country and political risk.

2. Data<br />

Global and country oil and gas market monthly indexed data are collected from the Datastream database, covering the period June 2001 to December 2008, for the world, the USA, the UK and the EMU. Over the same period, monthly political risk ratings are gathered from the ICRG (2009) for the UK, the USA and the major EMU countries of France, Germany and Italy.

The world oil and gas stock market price index values are reported by Datastream, which bases its series on stock exchange oil and gas price indices that commonly use a representative sample of publicly listed oil and gas related companies in each country, with the stock prices reflected in the index converted into US dollars at current exchange rates. The companies included in the index generally represent around 85% of the volumes traded in the country's oil and gas markets. The index is regularly re-assessed (at least every quarter) to identify changes in the trading volumes of each represented company's shares. A new portfolio is then compiled, with new weightings based on the changes in trading activity in each share.

The companies represented in the index commonly account for around 70% of the total oil and gas stock market capitalisation of listed companies in each market. The indices generally reflect information updated daily for the morning following the reference day and may be regarded as an important global economic indicator, reflective in part of global supply and demand conditions.
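The volume-based re-weighting just described amounts to renormalising each constituent's share of trading activity; a minimal sketch with invented company names and volumes:

```python
# Quarterly re-weighting by trading volume, as described for the
# Datastream sector indices. Company names and volumes are invented
# for illustration only.
volumes = {"OilCoA": 120e6, "OilCoB": 45e6, "GasCoC": 35e6}
total = sum(volumes.values())
weights = {name: v / total for name, v in volumes.items()}
# weights sum to 1; each company's share of trading activity
# becomes its index weight for the next review period
```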

3. The Model<br />

Changes in price indices and political risk ratings are studied in a single period (lags excluded) ordinary least<br />

squares regression format as follows:<br />

ΔP_OGW,t = α + β1(ΔP_OGUK,t) + β2(ΔP_OGUSA,t) + β3(ΔP_OGEMU,t) + β4(ΔSR_USA,t) + β5(ΔSR_UK,t) + β6(ΔSR_FRA,t) + β7(ΔSR_GERM,t) + β8(ΔSR_ITAL,t) + e_t .......... 1)

Where:

The ΔP_OG terms represent the changes in the oil and gas price index values for the world (the dependent variable), the UK, the USA and the EMU respectively.

The ΔSR terms represent the changes in the political risk ratings for the USA, the UK, France, Germany and Italy respectively.

α is the regression intercept for the world oil and gas regression at time t.

The β's represent the regression coefficients for each of the above independent variables.

It is also useful, in respect of issue number 3, to provide a basic study of UK political risk relationships with the USA and the key EMU countries. Another single-period ordinary least squares multiple linear regression model is tested as follows:

ΔSR_UK,t = α + β1(ΔSR_USA,t) + β2(ΔSR_FRA,t) + β3(ΔSR_GERM,t) + e_t .......... 2)

4. Findings

An indication of the volatility in the level series of the oil and gas stock market sectors for the world, the USA, the UK and the EMU is provided in Appendix 1. The study moves to first differences to remove serial correlation problems in the regression errors. The findings of this study of Equation 1) in first differences (price changes and political risk rating changes) are reported in Table 1.
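In outline, the estimation behind results of this kind differences the series and fits an ordinary least squares regression, from which the adjusted R Square follows. The sketch below uses synthetic stand-in data, not the actual Datastream or ICRG series, and includes only two of the regressors:

```python
import numpy as np

# Synthetic stand-ins for monthly first differences (the sample,
# June 2001 - December 2008, has roughly 90 monthly observations).
rng = np.random.default_rng(1)
T = 90
d_uk = rng.normal(size=T)                 # UK oil & gas index changes
d_us = rng.normal(size=T)                 # US oil & gas index changes
d_world = 0.5 * d_uk + 0.7 * d_us + 0.05 * rng.normal(size=T)

# OLS with an intercept, as in Equation 1) (two regressors here)
X = np.column_stack([np.ones(T), d_uk, d_us])
beta, *_ = np.linalg.lstsq(X, d_world, rcond=None)
resid = d_world - X @ beta

# Adjusted R Square penalises the fit for the number of regressors
r2 = 1 - resid @ resid / np.sum((d_world - d_world.mean()) ** 2)
adj_r2 = 1 - (1 - r2) * (T - 1) / (T - X.shape[1])
```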

Regression statistics                                      Value
Adjusted R Square                                          0.9187
Durbin-Watson test statistic                               2.2092
t-statistic, UK oil and gas market index changes           13.9038
t-statistic, US oil and gas market index changes           19.8146
t-statistic, political risk changes, UK                    -2.4572*

Note: Variables are significant at the 1% level except for *, which is significant at the 5% level.

Table 1: Regression results for the world oil and gas market index changes

The results show that there is not a problem with serial correlation in this regression as the Durbin Watson<br />

(DW) test statistic is significantly greater than 2. The results of the model may be relied upon. The explanatory<br />

power of the model is strong with an adjusted R Square value of 0.9187 (91.87%). The t-statistics show that the<br />

EMU oil and gas market does not possess a significant relationship with global oil and gas markets, but the US<br />

market and then the UK market are significant parts of the explanatory power of the model (t statistics at 19.8146<br />

and 13.9038, which are significant at the 1% level).<br />
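The Durbin-Watson statistic quoted above can be computed directly from the regression residuals; this is a generic sketch, not the authors' code:

```python
import numpy as np

def durbin_watson(residuals):
    """DW = sum of squared successive residual differences divided by
    the residual sum of squares. It ranges from 0 (strong positive
    serial correlation) to 4 (strong negative), with values near 2
    indicating no first-order serial correlation."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
```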

The t-statistics also indicate that political risk changes in the UK are an explanatory variable in this model (where the t-statistic, at -2.4572, is significant at the 5% level), but their contribution to the explanatory power is substantially less than that of the changes in the oil and gas market indices in the UK and the USA. Changes in political risk ratings in Germany, France and Italy are not significant explanatory variables.

Logically there is a positive relationship between the UK and the USA oil and gas price changes and those of<br />

the world oil and gas market. As returns increase in one market so they do in the other. There is a negative<br />

relationship between changes in political risk in the UK and the changes in the world oil and gas market index. That<br />

is, as political risk ratings decrease in the UK (that is, as political risk reduces) the world oil and gas prices and<br />

returns increase. This is not in accordance with the risk/return relationships in financial economics theory but it may<br />

indicate risk aversion on the part of the UK political environment when oil and gas markets are considered. It may<br />

be that when prices in the volatile oil and gas sector increase, the UK risk analysts perceive an improving domestic<br />

political environment.<br />



Table 2 shows the results of a system that incorporates the interaction of political risk changes in the UK with<br />

political risk changes in the major EMU countries (France, Germany and Italy) and the USA.<br />

Statistic                                      Value
Adjusted R Square                              0.0923
Durbin-Watson test statistic                   1.9684
t-statistic, USA political risk changes        3.8126
Standard error of regression                   0.7788

Note: All statistics are significant at the 1% level.

Table 2: Regression results for UK political risk ratings changes

These results indicate a model that is clearly not fully specified, but one that nevertheless has an explanatory power of around 9.23% (the adjusted R Square value of 0.0923 is significant at the 1% level). This number may be relied upon, as the DW test statistic is significantly greater than 1.5 and serial correlation in the regression errors is not a problem. Around 9% of the variance of the changes in UK political risk is explained predominantly by changes in political risk in the USA (the t-statistic for USA political risk changes, at 3.8126, is significant at the 1% level). Political risk changes in France, Germany and Italy are not significant explanatory variables in the system studied at any level of significance. The positive relationship between changes in USA political risk ratings and those of the UK indicates that as political risk in the USA increases, political risk in the UK also increases.

Equation 1) is respecified (in non-stationary level series) as a VAR with all variables in that equation optimally lagged. On a VAR stability condition test, the finding is that the VAR satisfies the test, with no root lying outside the unit circle. The VAR lag order selection criteria indicate, through the Likelihood Ratio and Schwarz criteria, that the optimal lag is 1-2 months. The VAR in the specified form, with the world oil and gas market treated endogenously, has strong explanatory power, with an adjusted R Square value of 0.8720, significant at the 1% level. The VAR-based Granger causality test Chi-Square statistics over a 2-month lag verify that there is no significant evidence at the 10% level of one-way causality running from the independent variables to the world oil and gas market.
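The stability condition test mentioned above checks that every root of the VAR's companion matrix lies inside the unit circle; a toy check with an invented coefficient matrix:

```python
import numpy as np

# Toy VAR(1) y_t = A @ y_{t-1} + e_t with an invented coefficient
# matrix; for a VAR(p) the same test is applied to the stacked
# companion matrix.
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
roots = np.linalg.eigvals(A)
stable = bool(np.all(np.abs(roots) < 1))  # no root outside the unit circle
```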

There is evidence at the 1% level that all variables in the model influence the UK oil and gas sector (when the<br />

UK is treated endogenously) and at the 10% level all variables influence the US oil and gas market (when the US oil<br />

and gas market is treated endogenously). When the EMU oil and gas market is treated endogenously there is also<br />

Granger causality running from the other variables at the 10% level of significance. When each country political risk<br />

variable is treated endogenously all of the other variables in each case are shown to influence that variable at levels<br />

of significance ranging from 1% to 10%.<br />

Pair-wise Granger causality tests on two-month lags were also run on changes in the oil and gas market indices and changes in the political risk ratings. Significance levels of F statistics up to the 10% level were observed. Granger causality runs from world oil and gas market changes to UK market changes (at 1%) and from the US market to the UK market (at 1%). There is dual causality between political risk changes in the UK and changes in the US oil and gas market, with the changes in political risk in the UK having slightly greater statistical significance (at 5% compared to 10%). Moreover, changes in the oil and gas markets in the USA Granger cause changes in political risk in Germany (at the 5% level); changes in political risk in the UK Granger cause changes in the EMU oil and gas market (at the 10% level); and changes in political risk in the UK Granger cause changes in political risk in France (at the 10% level).

There is dual Granger causality at the 10% level between changes in political risk in the UK and changes in political risk in Germany (the relationship is stronger running from Germany to the UK). Changes in the world oil and gas market Granger cause changes in political risk in Germany (at the 5% level). It is, however, significant at the 10% level that changes in the EMU oil and gas market Granger cause changes in the UK oil and gas market. Overall, the results again demonstrate that the stronger relationships are those between the world oil and gas market and the USA and UK oil and gas markets, and between USA and UK political risk ratings and the oil and gas markets.
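A pair-wise Granger causality test of the kind reported above compares a regression of a series on its own lags with one that adds lags of the candidate cause, and forms an F statistic from the two residual sums of squares. The sketch below is generic and runs on simulated data, not the paper's series:

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F statistic for the null that lags of x do not help predict y
    beyond y's own lags (two-month lags, as in the paper)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    dep = y[lags:]
    own = np.column_stack([y[lags - k:T - k] for k in range(1, lags + 1)])
    cross = np.column_stack([x[lags - k:T - k] for k in range(1, lags + 1)])
    ones = np.ones((T - lags, 1))
    rss = lambda X: np.sum((dep - X @ np.linalg.lstsq(X, dep, rcond=None)[0]) ** 2)
    rss_r = rss(np.hstack([ones, own]))          # restricted: own lags only
    rss_u = rss(np.hstack([ones, own, cross]))   # unrestricted: plus lags of x
    df_u = (T - lags) - (1 + 2 * lags)           # residual degrees of freedom
    return ((rss_r - rss_u) / lags) / (rss_u / df_u)

# Simulated example: x leads y by one month, so x should Granger
# cause y but not the reverse.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.9 * x[t - 1] + 0.1 * rng.normal()
```

With this construction, `granger_f(y, x)` is large while `granger_f(x, y)` stays near the F distribution's typical values.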



5. Conclusion<br />

It is concluded from this short analysis that, in unlagged data, the oil and gas markets of the US and the UK, in that order, have a stronger relationship with each other and with the global oil and gas market than with the EMU. To a lesser extent, political risk changes in the UK have a small but significant relationship with the global oil and gas market. The model that includes these variables has strong explanatory power. The political influence of the major EMU countries (that is, France, Germany and Italy) on the global oil and gas market is not statistically significant.

When a VAR is introduced with optimally lagged data the strength of the explanatory power of the model is<br />

confirmed. When both pair-wise Granger causality and VAR based causality tests are applied it is confirmed that the<br />

stronger relationships are between the world oil and gas market and those in the USA and the UK. The causality<br />

tests also indicate stronger causal relationships between the USA and the UK in both oil and gas markets and in<br />

political risk ratings. The major relationships in political risk in unlagged data, when political risk ratings only are<br />

considered, are between the US and the UK.<br />

All of this provides evidence that, during a period of volatility over the past seven years in both oil and gas markets and in the political environment, the USA and UK oil and gas markets have been closer to the world market than the EMU market has been. In addition, the UK has been closer to the USA than to the major EMU economies, which might indicate that, from the viewpoint of energy stock market sectors, the UK is understandably not yet fully committed either politically or economically to taking the next step in European integration by formally joining the EMU.

6. References<br />

Asche, F., Osmundsen, P., and Tveteras, R., (2000) “European Market Integration for Gas? <strong>Volume</strong> Flexibility and<br />

Political Risk”, CESifo Working Paper, Number 358, November.<br />

Asche, F., (2006), “The UK Market for Natural Gas, Oil and Electricity: Are the Prices Decoupled?” The Energy<br />

Journal, <strong>Volume</strong> 27, Issue 2, 27-40.<br />

Bachmeier, L J., (2006), “Testing for Market Integration Crude Oil, Coal and Natural Gas”, The Energy Journal,<br />

<strong>Volume</strong> 27, Issue 2, 55-72.<br />

Bourke, P., and Shanmugam, B., 1990, “Risks in International Lending”, in Bourke (Ed), An Introduction to Bank<br />

Lending, Sydney: Addison-Wesley Publishing.<br />

Cantor, R., and Packer, F., 1996, “Determinants and Impact of Sovereign Credit Ratings”, FRBNY Economic Policy<br />

Review, October 37-54.<br />

Eng, G., (2006) “A Formula for LNG Pricing”, Ministry for Economic Development New Zealand,<br />

http://www.med.govt.nz/Templates/multipageDocumentTOC___23939.aspx<br />

ICRG. (2000-2009), International Country Risk Guide, The PRS Group Inc.<br />

Mazighi, Ahmed El Hashemite, (2005), “Henry Hub and National Balancing Point Prices: What Will be the<br />

International Gas Price Reference”, Organisation of Petroleum Exporting Countries, Research Paper,<br />

September.<br />

Okugu, B E., (2002), “Issues in Global Natural Gas: A Primer and Analysis”, IMF Working Paper, Number 40,<br />

February.<br />

Pindyck, R. S., (1999) “The Long-Run Evolution of Energy Prices”, The Energy Journal, <strong>Volume</strong> 20, Number 2,<br />

pp1-27.<br />



Appendix 1<br />

[Figure: Monthly level series of the oil and gas stock market price indices, 2001-2008, shown in four panels: OGW, OGUK, OGUS and OIGEMU.]

Where OGW, OGUS, OGUK, OIGEMU are the price indices for the oil and gas sector of the stock markets for the<br />

world, the USA, the UK and the EMU.<br />



Appendix 2<br />

Note: PRUK, PRUSA, PRFRAN, PRGERM, PRITAL are the political risk ratings for the UK, USA, France, Germany and Italy. PRITAL was not a significant explanatory variable of UK political risk and is omitted from Equation 2).

[Figure: Monthly political risk ratings, 2001-2008, shown in five panels: PRUK, PRFRAN, PRITAL, PRUSA and PRGERM.]



THE PROGRESS OF NATURAL GAS MARKET LIBERALISATION IN THE UK AND USA: DE-COUPLING OF OIL AND GAS PRICES

John Simpson, School of Economics and Finance, Curtin University, Perth, Western Australia.<br />

Email: simpsonj@cbs.curtin.edu.au<br />

www.cbs.curtin.edu.au<br />

Abstract. According to the theory of market liberalisation, oil and gas prices should de-couple as de-regulation of natural gas markets<br />

is progressed. This paper expands previous de-coupling studies (primarily, the UK study by Panagiotidis and Rutledge, 2007) and<br />

compares the degree of de-coupling in the USA and the UK in differing periods of oil price volatility. Regression, cointegration and<br />

causality methodology reveals that progress has been made in both the US and the UK in de-regulation with perhaps stronger evidence<br />

of de-coupling of oil and gas markets in the UK, but it cannot be said that the nexus between oil and gas prices in either country has<br />

been severed when long-term relationships between the variables are considered. As a final comment, there is evidence in the short term (particularly in the US) of the importance of gas futures markets as drivers of spot gas prices, thus re-emphasising the importance of price expectations that embody country-specific economic forecasts, seasonality and storage factors.

Keywords: Oil prices, gas prices, cointegration, vector autoregressive models, de-coupling, de-regulation.<br />

JEL classification codes: C22; C52; O13; Q43.<br />

1. Introduction<br />

The connection between natural gas and oil markets, the degree of integration, and the corresponding volatility and similarity of volatility of these markets have been extensively studied. For example, Krichene (2002), in a supply and demand model, examined the world markets for crude oil and natural gas and finds that both markets became highly volatile following the oil shock of 1973. The elasticity estimates assisted in explaining the market power of oil producers. Price volatility in response to shocks supported the demand price and income elasticities found in energy

studies. Ewing et al. (2002) looked at time-varying volatility in oil and gas markets and find that common patterns of volatility emerge that might be of interest to financial market participants.

Adelman and Watkins (2005) find a degree of stochastic similarity of movement in oil and gas reserve prices<br />
for the period 1982 to 2003 in the USA using market transaction data. They find that both oil and gas current values<br />
rose after 2000, with oil rising more sharply than gas in 2003. A study by Regnier (2007) of crude oil, petroleum and gas<br />
prices over the period from 1945 to 2005 finds that these prices are more volatile than the prices of 95% of products sold<br />
by domestic producers, with oil prices showing greater volatility since the 1973 oil crisis.<br />

In relation to the direct connection between oil and gas prices, a party-to-party gas price bargaining model was<br />
expounded by Okogu (2002). The model posits that one of the principles of gas pricing is to relate the price of gas to<br />
its value in the market for oil as the major competing fuel, thus implying a strong interrelationship between oil and<br />
gas prices. The implication of such a model is that State or privately owned monopolies with market power can extract<br />
rent from consumers of gas when oil is the only other source of energy. Eng (2006), in debating the price that New<br />
Zealand should pay for its natural gas imports, alluded to the differences in the Japanese and Chinese pricing<br />
models, both of which show the accepted relationship between oil and gas prices, based on data from such sources<br />
as the Asia-Pacific Energy Research Centre and the Institute of Energy Economics of Japan. Again, these export<br />
pricing models imply a strong connection between oil and gas prices.<br />

However, Serletis and Rangel-Ruiz (2004) explored common features in North American energy markets,<br />
namely shared trends and cycles between oil and gas markets. The study examined Henry Hub gas prices and WTI crude oil<br />
prices and finds de-coupling of oil and gas prices as a result of de-regulation in the USA.<br />

De-regulation and liberalisation of gas markets implies the removal of the nexus between oil and gas prices.<br />

Reforms in the institutional environment and market place encourage “gas to gas” competition. Gas and oil prices<br />

are said to de-couple and welfare benefits to consumers emerge with a fall in the price of gas. In deregulated markets<br />

the Law of One Price should apply. As revisited by Asche (2000), in an integrated market, prices of homogeneous<br />



goods from different producers and suppliers should move together. Price differentials should only indicate<br />

differences in transportation costs and quality in that homogeneous product. This implies that in a deregulated<br />

natural gas market there should only be “gas on gas” competition and there should be no connection between oil and<br />

gas prices.<br />

The USA and the UK are two important Western economies that have undertaken positive steps to deregulate<br />

their gas markets. The USA gas market is around 6 times larger than the UK market in terms of volumes consumed.<br />

De-regulation of gas markets in the USA began in 1984 with the separation of natural gas supply from interstate<br />
pipeline transportation, the deregulation of natural gas production and the wholesale market, and the introduction<br />
of competition in interstate pipeline transportation. In the UK in 1986, the British government privatised British Gas, and<br />

further reforms required the unbundling of supply and transportation and the releasing of some gas supplies to<br />

competitors.<br />

The evidence on market de-coupling is mixed. Siliverstovs et al. (2005) investigated the degree of integration of<br />

natural gas markets in Europe, North America and Japan in the period early 1990s to 2004 using a principal<br />

components and a cointegration approach where oil and gas markets interacted. They find high levels of gas market<br />

integration within Europe, between Europe and Japan as well as within the North American market.<br />

Mazighi (2005) noted that the UK’s National Balancing Point (NBP) gas price was significantly related to oil<br />
prices. There is also evidence of a statistically significant relationship between oil and gas prices and industrial stock<br />
prices. In testing the long-term behaviour of NBP gas prices using regression analysis, he also finds a relationship<br />
with changes in the volume of manufactured production. As oil is used as a source of industrial power, it follows that<br />
gas prices are related to industrial stock prices as well as to alternative energy prices. Mazighi (2005) finds that more<br />
than 80% of gas price changes in the US market were not driven by their fundamental values; in other words, other<br />
factors, such as oil price changes, need to be considered to account for gas price changes. However, Mazighi (2005)<br />
suggests that, in the long-term and in accordance with economic theory, the evolution of the prices of natural gas and<br />
any other homogeneous commodity is guided by supply and demand.<br />

Asche (2006) also examined whether or not de-coupling of natural gas prices from prices of other energy<br />

commodities (such as oil and electricity) had taken place in the liberalised UK and in the regulated continental gas<br />

markets after the Interconnector integrated these markets in 1998. Asche finds that monthly price data from<br />

1995 to 1998 indicated a highly integrated market where wholesale demand appeared to be for energy generally,<br />

rather than specifically for oil or gas.<br />

By 2003 the UK gas market was highly liberalised, according to Panagiotidis and Rutledge (2007), who<br />

investigated the relationship between UK wholesale gas prices and Brent oil prices over the period 1996 to 2003 to<br />

test whether or not orthodox liberalisation theory applied and whether or not oil and gas prices had de-coupled.<br />

Using cointegration techniques and tests of exogeneity of oil prices through impulse response functions, their<br />

findings generally did not support the assumption of de-coupling of prices in the relatively highly liberalised UK<br />

market. The results may at least have indicated that progress in de-regulation had been made.<br />

Studies of the connection between oil and gas markets have suffered from the use of spot prices alone, when<br />
a growing body of evidence stresses the need to take into account gas price expectations embodied in futures<br />
prices, and thus prices that embody forecasts of macro-economic data as well as seasonality and storage factors. For<br />
example, Modjtahedi and Movassagh (2005) studied natural gas futures prices and find spot and futures prices are<br />
non-stationary, with trends due to positive drifts in the random walk components of the prices. They find that market<br />
forecast errors are stationary and that futures prices are less than expected future spot prices (implying futures prices<br />
are backwardated). They also find that the bias in futures prices is time varying and that futures have statistically<br />
significant market timing ability.<br />

Mu (2007) examined how weather shocks impact asset price dynamics in the US natural gas futures market<br />

revealing a significant weather effect on the conditional mean and volatility of gas futures returns. Marzo and<br />

Zagaglia (2007) modelled the joint movements of daily returns on one month futures for crude oil and natural gas<br />



using a multivariate GARCH 1 with dynamic conditional correlations and elliptical distributions. They find that the<br />

conditional correlation between the futures prices of natural gas and crude oil had risen over the preceding 5 years,<br />

but the correlation was low on average over most of the sample suggesting that futures markets did not have an<br />

established history of pricing natural gas as a function of oil market developments. Geman and Ohana (2009)<br />
remind their readers that a role for inventory in explaining the shape of the forward curve and spot price volatility<br />
in commodity markets is central to the theory of storage. They find that the negative relationship between price<br />
volatility and inventory is globally significant for crude oil, while for natural gas the negative correlation applies<br />
only during periods of scarcity and increases during winter months.<br />

Again, the evidence of relationships between oil and gas spot markets, as they are also influenced by futures<br />

markets, is mixed. Overall, the foregoing studies are useful in providing information and empirical evidence relating<br />

to the similarity of oil and gas spot and futures price volatility, the connection between oil and gas spot and futures<br />

prices and the integration of gas markets over a period of de-regulation of natural gas markets particularly in the<br />

USA and the UK. However, this paper does not delve into detailed analysis of de-regulation policies in each<br />

country.<br />

Initially assuming the existence of a relationship between gas and oil markets, the study considers a full period<br />
(from the beginning of January 2001 to the end of May 2010) and interim periods: a period of stable oil prices (from<br />
early January 2001 to early June 2003); a period of rapidly rising oil prices (from early June 2003 to early June 2007);<br />
and a period of downward volatility of oil prices (from early June 2007 until the end of May 2010; this latter period<br />
coincides with the global financial crisis). Perusal of the graph of oil prices from 2001 to 2010 in<br />
Figure 1 indicates the broad periods of price movement described above, sourced from the data in this study. It is<br />

deemed appropriate to divide the study into these periods to partially control for time varying relationships and to<br />

attempt to reduce the impact of apparent structural breaks in the data.<br />

[Figure: line chart of the OPEC oil price (OILOPEC) from 2001 to 2009; vertical axis from 0 to 160.]<br />
Figure 1: Oil Price Volatility<br />

The study compares price relationships in the US and the UK over the different periods of oil price volatility. The<br />

issues covered in this paper are as follows:<br />

1. Are the unlagged relationships between spot oil price changes, gas futures price changes and spot gas price<br />
changes statistically significant?<br />
2. In optimally lagged data, are the level series price relationships statistically significant, and have these price<br />
relationships changed?<br />
3. In lagged data, are there long-term cointegrating relationships among spot oil prices and spot and futures gas<br />
prices?<br />
4. In short-term dynamics, which markets are the major exogenous forces in the lagged models?<br />

1 Generalised Autoregressive Conditional Heteroskedasticity model.<br />



In other words, the central issue remains as to whether or not natural gas market liberalisation policies and deregulation<br />

legislation in the USA and the UK are working. In examining these issues it is expected that further<br />

information will be provided in relation to each of the above studies, but specifically this study is expected to<br />

expand that of Panagiotidis and Rutledge (2007). That is, the study provides a comparison of the<br />

progress of oil and gas price de-coupling and therefore gas market liberalisation in the UK as well as the USA<br />

markets; the study updates the period of analysis of previous studies by capturing the period of the recent global<br />

financial crisis; it allows for structural breaks in data and time varying relationships by investigating sub-periods of<br />

oil price stability and upward and downward oil price volatility; and it takes into account gas futures prices.<br />

2. The Model and Method<br />

In preliminary analysis, an unlagged model of first differences of the price index series is examined over a full<br />

period from 01/01/2001 to 31/05/2010 and over the sub-periods described above for both US and UK markets. The<br />

relative connectivity between oil, spot gas and gas futures price changes is examined. The preliminary analysis<br />

provides information on the strength of relationships in each spot natural gas market and how these relationships<br />

change in the various sub-periods.<br />

Based on evidence from studies of the nexus between oil and gas prices (for example, Krichene, 2002; Ewing et al.,<br />
2002; Okogu, 2002; Serletis &amp; Rangel-Ruiz, 2004; Adelman &amp; Watkins, 2005; Mazighi, 2005; Asche, 2006; Eng,<br />
2006; Regnier, 2007; Panagiotidis &amp; Rutledge, 2007) and taking into account gas futures price interaction (for<br />
example, Modjtahedi &amp; Movassagh, 2005; Mu, 2007; Marzo &amp; Zagaglia, 2007; and Geman &amp; Ohana, 2009), the<br />

following unlagged and lagged models are proposed for testing, with the various series logarithmically transformed.<br />

The unlagged models are as follows:<br />

ΔP_gst = α + β_1(ΔP_ost) + β_2(ΔP_gft) + e_t (1)<br />
<br />
where ΔP_gst, ΔP_gft and ΔP_ost are the changes in the spot gas price, the gas futures price and the spot oil price<br />
respectively in each of the country markets.<br />

In the main analysis, optimally lagged level series data are initially examined in a vector autoregressive model<br />

(VAR) for each of USA and UK markets. If the series examined are cointegrated a vector error correction model<br />

(VECM) is specified over the various periods of the study in order to confirm long-term equilibrium relationships<br />

and to test for short-run dynamics and exogeneity.<br />

Based on Equation 1 specified in level series, the following model in its functional form is specified. Note all<br />

variables on the right-hand side of the equation are specified in both an unlagged and a lagged form from t−1 to<br />
t−n, with n being the optimal lag deduced from lag exclusion tests and lag order information criteria. The<br />
endogenous variable is also lagged on the right-hand side of the equation.<br />
<br />
P_gst = f(P_ost, P_gft) (2)<br />
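As an illustration of how Equation 1 can be estimated, the following sketch runs the unlagged first-difference regression by OLS. The simulated series and the coefficient values (0.1 and 0.5) are hypothetical stand-ins, not the paper's DataStream data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000

# Hypothetical daily log-price changes (the ΔP terms in Equation 1)
d_oil = rng.normal(0.0, 0.02, T)   # spot oil price changes
d_fut = rng.normal(0.0, 0.03, T)   # gas futures price changes
# Spot gas price changes generated with assumed b1 = 0.1, b2 = 0.5
d_gas = 0.1 * d_oil + 0.5 * d_fut + rng.normal(0.0, 0.01, T)

# OLS estimate of Equation 1: d_gas = a + b1*d_oil + b2*d_fut + e
X = np.column_stack([np.ones(T), d_oil, d_fut])
a_hat, b1_hat, b2_hat = np.linalg.lstsq(X, d_gas, rcond=None)[0]
```

With a long enough sample the OLS estimates recover the assumed coefficients closely; in the paper the same mean equation is re-estimated with ARCH errors once heteroskedasticity is detected.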

3. The Data and Methodology<br />

There is a two-stage methodology in this paper. In preliminary analysis, unit root tests find that the level series<br />
prices, and the errors of the relationships between them, are non-stationary, while the first differences of those prices,<br />
and the errors of the first-differenced relationships, are stationary. The processes are therefore integrated and<br />
non-stationary. The study first examines unlagged relationships in a regression model. This is specified in price<br />
changes to remove the problem of serial correlation in the errors that is found if the model is specified in level series.<br />
The next stage of the method is to examine a lagged bivariate model specified in level series prices. The optimal lags<br />
are decided with lag length criteria and lag order tests. If cointegration is discovered, a VECM is specified to confirm<br />
cointegration and test for exogeneity.<br />
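The unit-root logic behind this two-stage design can be sketched with a basic Dickey-Fuller regression. The paper uses the augmented (ADF) form; this simplified sketch omits the augmentation lags and uses simulated series as assumptions:

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in: diff(y)_t = c + rho * y_{t-1} + e_t.
    Strongly negative values are evidence against a unit root."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=500))   # unit-root process, like a level price series
ar1 = np.zeros(500)
for t in range(1, 500):                  # stationary AR(1), like a differenced series
    ar1[t] = 0.5 * ar1[t - 1] + rng.normal()
```

The random walk yields a small (insignificant) statistic while the stationary series yields a strongly negative one, mirroring the level-series versus first-difference pattern reported in Table 2.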



Daily price data for the full period of the study are obtained from the DataStream database for all of the<br />

variables in the models. The spot gas price in the USA is the Henry Hub (HH) gas price. The HH is an index in<br />

US dollars per million British Thermal Units. The delivery point is a pipeline interchange near Erath,<br />

Louisiana, where a number of interstate and intrastate pipelines interconnect through a header system operated by<br />

the Sabine Pipe Line. It is also the standard delivery point for the NYMEX natural gas futures contract in the US. It<br />

is considered a representative indicator of US gas prices.<br />

The spot gas price in the UK is the National Balancing Point (NBP) gas price for UK or London. The NBP is a<br />

virtual trading location for the sale and purchase and exchange of UK natural gas. It is the pricing and delivery point<br />

for the Intercontinental Exchange (ICE) natural gas futures contract. It is the most liquid gas trading point in Europe<br />

and is a major influence on the price that domestic consumers pay for their gas at home. Gas at the NBP trades in<br />

pence per therm. It is similar in concept to the Henry Hub in the United States, but differs in that it is not an actual<br />

physical location 2 .<br />

The spot oil price is the Organisation of the Petroleum Exporting Countries (OPEC) oil price. The OPEC cartel<br />
(that is, a formal, explicit agreement among competing producers) consists of net oil-exporting countries, primarily<br />
Algeria, Angola, Ecuador, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, the United<br />
Arab Emirates, and Venezuela. The OPEC prices reflect the market power of this group of net oil-exporting countries,<br />
which collectively control exports of around 40% of the world’s oil requirements. These oil prices, though formed<br />
through the market power induced by cartel behaviour, are assumed to be the major driver of the global crude oil price.<br />

The UK gas futures price is the ICE London (UK) natural gas futures price for 6 months ahead. It is a Reuters<br />
continuation series, which gives the data for 6 months forward. The USA gas futures price is the NYMEX natural<br />
gas futures price for 6 months ahead, which is likewise a 6-month forward series. It starts at the 6th nearest contract<br />
month, which forms the first values of the continuous series until the first business day of the nearest contract month,<br />
at which point the next contract month is taken. These gas futures prices are considered representative of UK and US gas<br />

futures markets respectively. The gas futures prices selected represent gas market price expectations, as the data<br />
embody various global and country economic forecast data and information that impact price expectations, such as<br />
exchange rates, inflation, interest rates and growth rates, as well as storage factors and seasonal effects on gas demand<br />
in the USA and in Europe and the UK. Due to the unavailability of NBP data, the UK study cannot examine the<br />
first sub-period of relative oil price stability. It examines the second and third periods individually for the UK, and<br />
the full period for the UK thus covers both sub-periods of upward and downward oil price volatility.<br />

4. Preliminary Findings<br />

The findings of the analysis are reported as preliminary analysis, preliminary findings and main findings as follows:<br />

Preliminary analysis of all unlagged level series, for both the USA and the UK and in all periods of the study,<br />
indicates skewness and kurtosis problems, which indicate a lack of normality (Jarque &amp; Bera, 1987). This violates<br />
the assumptions of ordinary least squares (OLS) regression and indicates, in turn,<br />
problems of serial correlation in the errors of each of the regression relationships. First differencing removes the<br />
serial correlation in the errors of the first-difference relationships (according to Durbin-Watson<br />
(DW) tests; Durbin and Watson, 1971), but White tests indicate that heteroskedasticity problems remain in the<br />
errors, thus indicating model misspecification. An autoregressive conditional heteroskedasticity (ARCH) model is<br />
deemed more suitable for the analysis. The results of this analysis are reported in Table 1.<br />



| Regression Model (Equation 1) | Adjusted R Square | z Statistics: Spot Oil / Gas Futures (price changes) | Standard Error of Regression | Durbin-Watson Statistic | Variance Equation Coefficients: ARCH / GARCH |<br />
| US Spot Gas Price Changes: Period 1 | 0.0011 | 1.4534/0.7073 | 0.5386 | 2.0750 | 2.1763/0.3418 |<br />
| Period 2 | 0.0735* | 6.3355*/10.1860* | 0.3060* | 1.9919* | 0.1349*/0.8705* |<br />
| Period 3 | 0.0990* | 7.9450*/5.6669* | 0.2106* | 2.1873* | 0.1669*/0.7524* |<br />
| Full Period | 0.0366* | 10.4862*/10.8608* | 0.3572* | 2.0778* | 0.2061*/0.8609* |<br />
| UK Spot Gas Price Changes: Period 1 | 0.0002 | 0.2275/0.3931 | 6.3800 | 2.2445 | 0.2276*/0.8805* |<br />
| Period 2 | -0.0006 | 0.6469/-0.7428 | 3.0404 | 2.2748 | 0.1979*/0.7781* |<br />
| Full Period | 0.0039* | 1.1086/4.9908* | 5.2515 | 2.2574* | 0.1866*/0.8782* |<br />
<br />
Note: For the spot oil z statistics and ARCH/GARCH terms, no asterisk means no significance; * denotes significance at<br />
the 1% level, ** at the 5% level and *** at the 10% level.<br />
<br />
Table 1: Results of Spot Gas First Differenced Models<br />
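The heteroskedasticity diagnosis motivating the ARCH specifications in Table 1 can be sketched with Engle's ARCH LM test, a close relative of the White test the paper reports: regress squared residuals on their own lags and compute T times the R-squared. The GARCH parameter values and series below are simulated assumptions, not the paper's estimates:

```python
import numpy as np

def arch_lm(resid, q=5):
    """Engle's ARCH LM statistic: T*R^2 from regressing squared residuals
    on their own q lags. Large values signal ARCH effects (chi-square, q df)."""
    e2 = resid ** 2
    Y = e2[q:]
    X = np.column_stack([np.ones(len(Y))] + [e2[q - i - 1:-i - 1] for i in range(q)])
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    ssr = np.sum((Y - X @ beta) ** 2)
    sst = np.sum((Y - Y.mean()) ** 2)
    return len(Y) * (1 - ssr / sst)

rng = np.random.default_rng(2)
T = 2000
e = np.zeros(T); h = np.ones(T)
for t in range(1, T):                       # simulated GARCH(1,1) errors:
    h[t] = 0.1 + 0.15 * e[t - 1] ** 2 + 0.80 * h[t - 1]   # volatility clustering
    e[t] = np.sqrt(h[t]) * rng.normal()
iid = rng.normal(size=T)                    # homoskedastic benchmark
```

The clustered series produces a statistic far above the chi-square critical value while the homoskedastic benchmark does not, which is the pattern that justifies moving from OLS to an ARCH/GARCH variance equation.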

Whilst the explanatory power of the models for USA spot gas price changes is low over all periods of the<br />
study, it has improved over periods 2 and 3. The results are significant at the 1% level and serial correlation does<br />

not detract from the reliability of the model’s parameters as indicated by DW statistics. The z statistics for each of<br />

spot oil and gas futures are statistically significant over the full period of the study and for sub-periods 2 and 3. The<br />

magnitude of the z statistic for spot oil has increased through the second sub-period (upward volatility of oil prices)<br />

and the third period (downward volatility of oil prices) and is greatest in magnitude over the full period.<br />

In unlagged data, it is evident that the USA gas market has not de-coupled from the oil market; the relationship<br />

between gas and oil markets having increased over the full period studied. However, the z statistics for US gas<br />

futures are also significant at the 1% level over the second, third and full periods. The magnitude of the z statistic for<br />

US gas futures is greatest over the full period of the study (and greater than the corresponding statistic for spot oil<br />
over the full period), and its strength relative to spot oil is greater in the second sub-period of upward oil price<br />
volatility than in the first and third sub-periods (that is, in the periods of oil price stability and downward oil price<br />
volatility respectively). This<br />

indicates that economic forecasts, seasonality and storage factors embodied in US gas futures price changes are<br />

important in US gas markets over periods of upward price volatility, downward volatility and over the full period.<br />

This evidence indicates that, though the nexus between gas and oil prices has not been broken, sound progress has<br />

been made in deregulation of US gas markets as seen in the connection of gas futures price changes to spot gas price<br />

changes.<br />

Evidence of de-coupling in the UK gas market is provided in unlagged data with the lack of significance of the<br />

spot oil price change variable in any period of the study. The explanatory power in the models for all periods is<br />

lower for the UK spot gas market. None of the z statistics for the independent variables (that is, the spot oil and UK<br />
gas futures variables) is statistically significant in any period except the full period of the study, where the significant<br />
variable is gas futures price changes. However, it appears that economic expectations, seasonality and storage issues<br />

as reflected in gas futures price changes are not as important in UK gas markets. In the UK the nexus between oil<br />

and gas appears to have been broken and gas price expectations embodied in gas futures price changes are<br />

significant over the full period of the study. In the UK it appears that deregulation policies have acted to liberalise<br />

the spot gas market over the full period of the study. In the next stage of the preliminary analysis, the level series, the<br />
first differenced series and the respective errors of these relationships, for each country and each period under study,<br />
are tested for unit roots using Augmented Dickey-Fuller (ADF) tests (Dickey &amp; Fuller, 1981). The results of<br />

these tests are shown in Table 2.<br />



| Variable | t statistic (level series prices) | t statistic (first differences) |<br />
| EQUATION ONE VARIABLES (USA): | | |<br />
| US spot gas: Period 1 | -3.6194* | -16.0758* |<br />
| Period 2 | -2.4680 | -30.3681* |<br />
| Period 3 | -1.1756 | -28.2260* |<br />
| Full Period | -3.2471** | -30.8007* |<br />
| Spot Oil: Period 1 | -1.9851 | -20.5592* |<br />
| Period 2 | -1.2215 | -25.4412* |<br />
| Period 3 | -1.2526 | -20.5072* |<br />
| Full Period | -1.3347 | -25.2201* |<br />
| US gas futures: Period 1 | -0.2257 | -27.6259* |<br />
| Period 2 | -1.6665 | -32.1152* |<br />
| Period 3 | -1.1074 | -29.1183* |<br />
| Full Period | -1.8914 | -50.5273* |<br />
| US spot gas errors: Period 1 | -5.5641* | -15.0709* |<br />
| Period 2 | -3.9899* | -32.2051* |<br />
| Period 3 | -2.0991 | -27.2291* |<br />
| Full Period | -2.9969** | -30.5471* |<br />
| EQUATION ONE VARIABLES (UK): | | |<br />
| UK spot gas: Period 1 | -4.1836* | -18.6215* |<br />
| Period 2 | -2.6305*** | -23.1402* |<br />
| Full Period | -4.6458* | -23.7836* |<br />
| Spot Oil: Period 1 | -0.9171 | -26.5189* |<br />
| Period 2 | -1.2494 | -20.4901* |<br />
| Full Period | -1.4947 | -22.1242* |<br />
| UK gas futures: Period 1 | -1.6575 | -31.4909* |<br />
| Period 2 | -1.0020 | -25.9100* |<br />
| Full Period | -2.0604 | -40.8704* |<br />
| UK spot gas errors: Period 1 | -4.4892* | -18.3592* |<br />
| Period 2 | -4.3887* | -23.1257* |<br />
| Full Period | -5.0251* | -23.6352* |<br />
<br />
Note: No asterisk means non-significance; * denotes significance at the 1% level, ** at the 5% level and *** at the<br />
10% level. ADF test results are shown; PP unit root tests are not reported but confirm the ADF results. The full period<br />
for the USA is from 1/1/2001 to 31/5/2010. The first period for the USA is from 1/1/2001 to 2/6/2003 (oil price<br />
stability), the second from 2/6/2003 to 1/6/2007 (upward oil price volatility) and the third from 1/6/2007 to 31/5/2010<br />
(downward oil price volatility). The first period for the UK is the period of upward volatility from 2/6/2003 to<br />
1/6/2007, due to limitations in NBP spot gas price data, which commence on 2/6/2003. The second period for the UK<br />
is from 1/6/2007 to 31/5/2010 (downward oil price volatility). The full period for the UK is thus from 2/6/2003 to<br />
31/5/2010.<br />
<br />
Table 2: Unit Root Tests<br />

The results in Table 2 indicate, overall, that the level series and the errors of the level series relationships are<br />
non-stationary, while the first differenced series and the errors of the first differenced relationships are stationary.<br />
It can therefore be concluded that, in each case for each country and for each period under study, the processes are<br />
integrated and non-stationary, which in turn enables a move to the main analysis: that is, to apply all level series to a<br />
VAR-based Johansen (Johansen, 1988) cointegration test. If cointegration is present, a VECM 3 is specified to confirm<br />
long-term relationships and to test the short-term dynamics of those relationships with Granger causality tests (Granger, 1988).<br />

3 Once it is ascertained that the variables are I(1) and optimally lagged, a vector error correction model (VECM) is used in order to confirm<br />
cointegration and test causality. The VECM is a re-parameterised version of the unrestricted VAR and is appropriate when the variables are I(1)<br />
and cointegrated. In the presence of I(1) variables but no cointegration, causality would be studied from the VAR model specified in first<br />
differences.<br />
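The causality step can be sketched with a minimal bivariate example. The paper reports Chi-square Wald statistics within a VECM; this sketch instead uses the basic F-test form of the Granger test on simulated series (the data-generating process and lag length are assumptions):

```python
import numpy as np

def ssr(Y, X):
    """Sum of squared residuals from an OLS fit of Y on X."""
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    r = Y - X @ beta
    return float(r @ r)

def granger_f(y, x, p=2):
    """F-statistic for H0: the p lags of x add nothing to an AR(p) model of y."""
    T = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - i - 1:T - i - 1] for i in range(p)])
    xlags = np.column_stack([x[p - i - 1:T - i - 1] for i in range(p)])
    ones = np.ones((T - p, 1))
    ssr_r = ssr(Y, np.hstack([ones, ylags]))          # restricted: own lags only
    ssr_u = ssr(Y, np.hstack([ones, ylags, xlags]))   # unrestricted: plus lags of x
    dof = (T - p) - (1 + 2 * p)
    return ((ssr_r - ssr_u) / p) / (ssr_u / dof)

rng = np.random.default_rng(3)
T = 1500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()  # x drives y, never the reverse

f_x_to_y = granger_f(y, x)   # expected large: lags of x predict y
f_y_to_x = granger_f(x, y)   # expected small: no feedback by construction
```

The asymmetry of the two statistics is the pattern read off in Table 3, for example from gas futures to spot gas in the US.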



5. Main Findings<br />

Lag order and lag exclusion tests are conducted to determine an optimal lag for cointegration and causality testing.<br />
The models were initially tested for stability over all periods of the study, for both US and UK markets, using stability<br />
condition tests. The findings are that the models are stable, with no root lying outside the unit circle.<br />
The results of the lag order, cointegration and causality tests are shown in Table 3.<br />

| Model (Equation 2) | Cointegrating Relationships (Trace and Max-Eigenvalue tests) | Optimal Lag Order (Days) | Granger Causality (Chi-Square Statistic) |<br />
| US Spot Gas: Period 1 | 2** | 3* | US Gas Futures Granger causes US Spot Gas (135.4754*); Spot Gas causes Gas Futures (6.2797***), with the causality from gas futures to spot gas the stronger. No significant Granger causality between Spot Oil and US Spot Gas. |<br />
| US Spot Gas: Period 2 | 1** (Trace test only) | 4* | US Gas Futures Granger causes US Spot Gas (120.7309*). No significant Granger causality between Spot Oil and US Spot Gas. Within the model for this sub-period, US Gas Futures significantly Granger causes Spot Oil (64.1979*). |<br />
| US Spot Gas: Period 3 | None | 3* | US Gas Futures Granger causes US Spot Gas (249.1941*). No significant Granger causality between Spot Oil and US Spot Gas. Within the model for this sub-period, US Gas Futures significantly Granger causes Spot Oil (73.4092*), as does US Spot Gas (9.4335**). |<br />
| US Spot Gas: Full Period | 2** | 4* | US Gas Futures Granger causes US Spot Gas (346.2631*). No significant Granger causality between Spot Oil and US Spot Gas. Within the model and over the full period, US Gas Futures significantly Granger causes Spot Oil (127.8207*). |<br />
| UK Spot Gas: Period 1 | 2** | 3* | No significant Granger causality between Spot Oil and UK Spot Gas. No significant Granger causality from UK Gas Futures to UK Spot Gas. Within the model for this period, UK Gas Futures significantly Granger causes Spot Oil (15.2450*). |<br />
| UK Spot Gas: Period 2 | None | 2* | No significant Granger causality between Spot Oil and UK Spot Gas. No significant Granger causality from UK Gas Futures to UK Spot Gas. Within this model, Spot Oil Granger causes UK Gas Futures (8.5459***) and UK Gas Futures Granger causes Spot Oil (83.7156*), with the latter relationship the stronger according to the magnitude of the Chi-Square statistic. |<br />
| UK Spot Gas: Full Period | 2** | 3* | No significant Granger causality between Spot Oil and UK Spot Gas. Gas Futures Granger cause Spot Gas (33.1187*). Granger causality is significant from Spot Oil to UK Gas Futures (11.6501*) and from UK Gas Futures to Spot Oil (86.7800*), with the latter the stronger causal relationship. |<br />
<br />
Note: The Johansen cointegration tests assume a linear deterministic trend in the data. Optimal lags are decided based on<br />
the majority significance of the Likelihood Ratio, the Final Prediction Error, and the Akaike, Schwarz and Hannan-Quinn<br />
information criteria. The number of cointegrating equations is based on both maximum eigenvalue and trace statistics.<br />
For the number of cointegrating relationships, no asterisk means no significance; * denotes significance at the 1% level,<br />
** at the 5% level and *** at the 10% level. The causality tests show similar levels of Chi-Square statistical significance.<br />
<br />
Table 3: Optimal Lags and Cointegration and Causality Test Results<br />
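The lag-selection step reported in Table 3 can be sketched by fitting VAR(p) models over a range of p and minimising an information criterion. This sketch uses a simulated bivariate VAR(2) and the multivariate AIC; the coefficient matrices are assumptions, and each p is fitted on its own effective sample (a common simplification):

```python
import numpy as np

def var_aic(data, p):
    """Multivariate AIC for a VAR(p) fitted by OLS:
    log|Sigma_hat| + 2k/T_eff, with k the number of mean parameters."""
    T, n = data.shape
    Y = data[p:]
    X = np.hstack([np.ones((T - p, 1))] + [data[p - i - 1:T - i - 1] for i in range(p)])
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    E = Y - X @ B
    sigma = (E.T @ E) / (T - p)                 # residual covariance matrix
    k = n * (n * p + 1)                         # coefficients plus intercepts
    return float(np.log(np.linalg.det(sigma)) + 2 * k / (T - p))

rng = np.random.default_rng(4)
T, n = 800, 2
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.0, 0.3], [0.2, 0.0]])        # the true process is a VAR(2)
data = np.zeros((T, n))
for t in range(2, T):
    data[t] = A1 @ data[t - 1] + A2 @ data[t - 2] + rng.normal(size=n)

aics = {p: var_aic(data, p) for p in range(1, 6)}
best = min(aics, key=aics.get)
```

In practice, as the Table 3 note describes, several criteria (likelihood ratio, FPE, AIC, SC, HQ) are compared and the majority choice is taken.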

In both markets over the long-term there is evidence of cointegration: in the first sub-period for the US (the<br />
period of oil price stability), in the first sub-period for the UK (the period of upward price volatility), and in each<br />
market over the full period of the study. This represents evidence that, whilst short-term causal relationships may<br />
show de-coupling, there remains a long-term relationship between oil prices, gas prices and gas futures prices<br />
whereby these variables move in a similar way and converge towards equilibrium. This represents evidence that in the<br />
long-term the markets have not fully de-coupled and that in both markets deregulation policies still have some<br />
distance to go in achieving full gas market liberalisation.<br />

The most important finding, for both the US and the UK markets over the sub-periods of the study and over the<br />

full period, is that there is no significant short-term causal relationship between spot oil and spot gas. In addition, it<br />

is evident that in the US, gas futures Granger cause gas spot prices over the various sub-periods and over the full<br />

period of the study. In the UK there is significant Granger causality running from gas futures to spot gas over the<br />

full period of the study. There is however evidence within the model, of Granger causality between gas futures and<br />

spot oil for both markets. Overall, in the short-term there is evidence of de-coupling in both markets, but the nexus<br />

between oil and gas futures prices is not fully broken.<br />

6. Conclusion<br />

The theoretical base for this paper lies in the revisiting of market liberalisation theory by Asche (2000), but the paper focuses on updating and expanding a UK cointegration and causality study by Panagiotidis and Rutledge (2007). Just as the authors reviewed produce mixed evidence, mixed evidence is provided in this paper. The paper commences with the assumption that there exists a relationship between oil and gas prices as implied, for example, by Okogu (2002) and Eng (2006). Mazighi (2005) finds the UK gas price was significantly related to oil prices. Researchers such as Krichene (2002), Ewing et al. (2002), Adeleman and Watkins (2005) and Regnier (2007) find that oil and gas markets possess similar trends in stochastic volatility. The importance of gas futures in their embodiment of price expectations based on economic, seasonal and storage information is put forward by researchers such as Modjtahedi and Movassagh (2005), Mu (2007), Marzo and Zagaglia (2007) and Geman and Ohana (2009). Serlitis and Rangel-Ruiz (2004) find de-coupling of oil and gas prices as a result of deregulation in the USA. Siliverstovs et al. (2005) find high levels of gas market integration within Europe and North America. Asche (2006) finds that monthly price data from 1995 to 1998 in the UK indicated a highly integrated gas market. The findings of Panagiotidis and Rutledge (2007) generally did not support the assumption of de-coupling of prices in the relatively highly liberalised UK market, but imply that progress was being made in deregulation policies.<br />

The results of the study in this paper are summarised as follows. In unlagged data and regression analysis it is evident that overall the UK has achieved progress in the de-coupling of oil and gas markets. The nexus between spot oil and spot gas markets in the US is stronger, but so is the relationship between US gas futures and US spot gas. When data are lagged and considered in short-term causal relationships there is strong evidence of de-coupling of spot oil and spot gas prices in both markets over all sub-periods and full periods of the study. This de-coupling effect in the short term is supported by evidence of significant causality running from gas futures to spot gas, though within the multivariate model there is evidence of a causal relationship between gas futures and spot oil prices for both countries. However, when long-term relationships are considered in lagged data there is evidence of cointegration in multivariate models where spot gas, gas futures and oil prices interact, and this means that the nexus between oil and gas prices has not been severed in either market. For the US this nexus between spot oil, gas futures and spot gas markets appears stronger in the first period of oil price stability from 1/1/2001 to 2/6/2003, but also over the full period studied from 1/1/2001 to 31/5/2010. There is no discernible connection between these markets during the period of the global financial crisis and partial recovery, which included rapidly falling oil prices. In the UK, the nexus between spot oil, gas futures and spot gas appears strongest in the period of upward oil price volatility from 2/6/2003 to 1/6/2007, and over the full period of that study from 2/6/2003 to 31/5/2010. Again there is no discernible connection between these markets in the UK over the period of the global financial crisis and partial recovery.<br />



Progress has been made in both the US and the UK in deregulation, with perhaps stronger evidence of de-coupling of oil and gas markets in the UK, but it cannot be said that the nexus between oil and gas prices in either country has been severed when long-term relationships between the variables are considered. There is no discernible evidence to indicate that any relationships identified in this study vary significantly from one sub-period to another. As a final comment, there is evidence in the short term (particularly in the US) of the importance of gas futures markets as drivers of spot gas prices, thus re-emphasising the importance of price expectations that embody economic forecasts, and seasonality and storage factors.<br />

7. References<br />

Adeleman, M. A., and Watkins, G. C., (2005), “US Oil and Natural Gas Reserve Prices, 1982-2003”, Energy Economics, Volume 27, Issue 4, July, Pages 553-571.<br />

Asche, F., (2000), “European Market Integration for Gas? Volume Flexibility and Political Risk”, CESIFO Working Paper, Number 358, November.<br />

Asche, F., (2006), “The UK Market for Natural Gas, Oil and Electricity: Are the Prices Decoupled?”, The Energy Journal, Volume 27, Issue 2, Pages 27-40.<br />

Dickey, D. A., and Fuller, W. A., (1981), “Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root”, Econometrica, Volume 49, Pages 1057-1072.<br />

Durbin, J., and Watson, G. S., (1971), “Testing for Serial Correlation in Least Squares Regression. III”, Biometrika, Volume 58, Pages 1-42.<br />

Eng, G., (2006), “A Formula for LNG Pricing”, Ministry for Economic Development New Zealand, http://www.med.govt.nz/Templates/multipageDocumentTOC___23939.aspx.<br />

Ewing, B. T., Malik, F., and Ozfidan, O., (2002), “Volatility Transmission in the Oil and Natural Gas Markets”, Energy Economics, Volume 24, Issue 6, November, Pages 525-538.<br />

Geman, H., and Ohana, S., (2009), “Forward Curves, Scarcity and Price Volatility in Oil and Natural Gas Markets”, Energy Economics, Volume 31, Issue 4, July, Pages 576-585.<br />

Granger, C. W. J., (1988), “Some Recent Developments in a Concept of Causality”, Journal of Econometrics, Volume 39, Pages 199-211.<br />

Jarque, C. M., and Bera, A. K., (1987), “A Test for Normality of Observations and Regression Residuals”, International Statistical Review, Volume 55, Pages 163-172.<br />

Johansen, S., (1988), “Statistical Analysis of Cointegration Vectors”, Journal of Economic Dynamics and Control, Volume 12, Pages 231-254.<br />

Krichene, N., (2002), “World Crude Oil and Natural Gas: A Demand and Supply Model”, Energy Economics, Volume 24, Issue 6, November, Pages 557-576.<br />

Marzo, M., and Zagaglia, P., (2007), “A Note on the Conditional Correlation between Energy Prices: Evidence from Future Markets”, Energy Economics, Volume 30, Issue 5, September, Pages 2454-2458.<br />

Mazighi, A. E. H., (2005), “Henry Hub and National Balancing Point Prices: What Will Be the International Gas Price Reference?”, Organisation of Petroleum Exporting Countries, Research Paper, September.<br />

Modjtahedi, B., and Movassagh, N., (2005), “Natural Gas Futures: Bias, Predictive Performance, and the Theory of Storage”, Energy Economics, Volume 27, Issue 4, July, Pages 617-637.<br />

Mu, X., (2007), “Weather, Storage and Natural Gas Price Dynamics: Fundamentals and Volatility”, Energy Economics, Volume 29, Issue 1, January, Pages 46-63.<br />

Okogu, B. E., (2002), “Issues in Global Natural Gas: A Primer and Analysis”, IMF Working Paper, Number 40, February.<br />

Panagiotidis, T., and Rutledge, E., (2007), “Oil and Gas Markets in the UK: Evidence from a Cointegration Approach”, Energy Economics, Volume 29, Issue 2, March, Pages 329-347.<br />

Regnier, E., (2007), “Oil and Energy Price Volatility”, Energy Economics, Volume 29, Issue 3, May, Pages 405-427.<br />

Serlitis, A., and Rangel-Ruiz, R., (2004), “Testing for Common Features in North American Energy Markets”, Energy Economics, Volume 26, Issue 3, May, Pages 401-414.<br />

Siliverstovs, B., L’Hegaret, G., Neumann, A., and von Hirschhausen, C., (2005), “International Market Integration for Natural Gas? A Cointegration Analysis of Prices in Europe, North America and Japan”, Energy Economics, Volume 27, Issue 4, July, Pages 603-615.<br />



QUANTITATIVE EASING ENGINEERED BY THE FED, AND PRICES OF INTERNATIONALLY<br />

TRADED AND DOLLAR DENOMINATED COMMODITIES AND PRECIOUS METALS<br />

Gueorgui I. Kolev, EDHEC Business School, France<br />

Email: gueorgui.kolev@edhec.edu<br />

Abstract. I study the impact of quantitative easing by the FED on the prices of internationally traded and dollar denominated commodities (oil) and precious metals (gold, silver, platinum and palladium). Finite distributed lag models suggest that the long run multipliers in regressions of the log of precious metals or commodities prices on the log of the US monetary base are about one, i.e., a permanent one percent increase in the US monetary base results in a one percent increase in the prices of these commodities and precious metals. In other words, the quantitative easing actions by the FED are purely inflationary, as predicted by classical economic theory. I also present an event study of the quantitative easing announcement effects on prices; these are small for precious metals and negative for oil. Overall this study suggests a certain dissonance between price behavior in the bond market on the one hand, and in the commodities and precious metals markets on the other. It also suggests that taking the US CPI reported by the Bureau of Labor Statistics to be identical with the concept of inflation can be quite misleading. Further research is needed to determine why the inflation implied by the US CPI and the US government debt market is so different from the inflation implied by US dollar denominated and internationally traded commodities and precious metals.<br />

Keywords: quantitative easing, monetary policy, precious metals’ prices, oil prices<br />

JEL classification: E5, G1<br />

1 Introduction<br />

Towards the end of September 2008, the American FED started an unprecedented monetary expansion, doubling the US monetary base. This was done through what Taylor (2009) calls mondustrial policy: financing certain sectors and firms by printing money.<br />

Cochrane (2011) worries that the US might enter deflation. Because nominal interest rates are close to zero, so that US Treasury bonds effectively become money, he argues that the FED does not have enough influence over events: whether the FED prints money or not does not matter for the US economy. He bases his observations on the fact that the US consumer price index has been flat since September 2008, and that the spreads between US nominal Treasury bonds and US real bonds (Treasury Inflation Protected Securities) have been negligible.<br />

I study the prices of commodities and precious metals that are internationally traded and denominated in US dollars, mostly after September 2008, when quantitative easing was started by the FED. Looking at the growth in the prices of these internationally traded goods since September 2008, one sees that the US is already in a state of very high inflation, one might even say hyperinflation. I suggest that the mondustrial policy carried out by the FED might not have been so innocuous, and might threaten the solvency of the US government and its ability to borrow money in the future.<br />

The main point of this paper is to show that looking at the Consumer Price Index (CPI) and trying to decipher inflation from it is not a good idea. The CPI is an imaginary number that bureaucrats come up with, most probably pursuing a political agenda. Even if the US Bureau of Labor Statistics is politically impartial and doing its job as well as it can, the CPI it produces is still not subject to market discipline. If the Bureau gets it wrong, by reporting too high or too low a CPI, no arbitrageur can enter and correct the “mispricing”, because the CPI is not traded.<br />

On the other hand, looking at commodities and precious metals, it is very easy to see whether the US dollar is inflating or not. Commodities and precious metals are commonly quoted in US dollars. The exchanges where they are traded have no incentive to misrepresent their value in dollars. If their value in dollars does not reflect their fair value, any arbitrageur is free to enter and, through his trades and attempts to profit from the mispricing, correct it.<br />



2 Preliminary Graphical Analysis<br />

Figure 1 below depicts the US monetary base measured in dollars, the spot price of oil (Crude Oil, Light-Sweet,<br />

Cushing, Oklahoma) and the price of gold (London, afternoon fixing). All the series are normalized by subtracting<br />

the mean of the series and dividing by the standard deviation. We can see from this picture the unprecedented<br />

growth in the US monetary base that started in September 2008. Before that, including the period before January<br />

2005 that is not shown in the graph, the US monetary base was growing at a steady, negligible rate. Then, in September 2008, the FED, in a desperate move to save the sinking US economy, doubled the monetary base. The first reaction to this is that the US government is monetizing its debt.<br />

Looking at the core US CPI, nothing happens in September 2008 (Cochrane, 2011, figure 6). If anything, the<br />

rate of inflation measured by the core CPI decreases between July 2008 and January 2009. This is very puzzling<br />

from the standard classical economic theory point of view. If we consider the identity MV = PY (Mankiw, 2010,<br />

ch.4), where M is the supply of money, V is the velocity of money, P is the price level, and Y is the real output of<br />

the US economy, we see that M has doubled, Y has not changed, so for P to stay fixed, the velocity of money V<br />

must have halved, the latter being extremely implausible. The other alternative is that the CPI is a very misleading<br />

measure of the price level P.<br />
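The quantity-equation arithmetic in the paragraph above can be made explicit with a few lines of Python. The numbers are illustrative normalized levels, not actual US data: holding P and Y fixed while M doubles forces V to halve.

```python
# Quantity-theory identity MV = PY (Mankiw, 2010, ch. 4): if the money
# supply M doubles while real output Y and the price level P stay flat,
# the velocity of money V must halve. Illustrative numbers only.

def implied_velocity(M, P, Y):
    """Solve the identity MV = PY for V."""
    return P * Y / M

M0, P0, Y0 = 1.0, 1.0, 1.0                   # normalized pre-2008 levels
v_before = implied_velocity(M0, P0, Y0)      # V = 1.0
v_after = implied_velocity(2 * M0, P0, Y0)   # M doubles, P and Y fixed

print(v_before, v_after)
```

The implausibility of V halving overnight is what motivates the alternative reading: that the CPI is a misleading measure of P.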

Looking at the dollar denominated prices of oil and gold in Figure 1 below, we see a radically different story. The moment the FED started printing money, the prices of oil and gold started growing accordingly. This is exactly what one would expect from a classical economic theory point of view.<br />


Figure 1. Time series evolution of the US monetary base, the price of gold and the price of oil. All variables are standardized by subtracting the<br />

mean and dividing by the standard deviation. The heavy vertical line corresponds to 24 September 2008 when the unprecedented growth in the<br />

US money supply started, the dashed and dotted vertical lines correspond to dates of FOMC quantitative easing announcements, 25 November<br />

2008, 1 December 2008, 16 December 2008, 28 January 2009, 18 March 2009, 12 August 2009, 23 September 2009, 4 November 2009, 10<br />

August 2010, and 21 September 2010.<br />

Focusing on the period where there was substantial variation in the US monetary base, that is after and<br />

including September 2008, Figure 2 below displays scatter plots of standardized oil and gold prices versus the<br />

standardized US monetary base. The values are contemporaneous, so these pictures do not capture any dynamics.<br />

The static relation between the gold price and the monetary base is almost linear. The thick line is the 45 degree line,<br />

and the dotted line is the best linear fit between the price of gold and the contemporaneous monetary base. They<br />

almost overlap, so for a one standard deviation increase in the monetary base after and including September 2008, the price of gold increases by one standard deviation too.<br />

Figure 2 below shows that the relation between the price of oil and the US monetary base is not that simple.<br />

This is to be expected as oil is also the single most important input to production, whereas gold is mostly used as an<br />

investment asset. Therefore the price of oil is expected to depend heavily on the state of the world economy, and the<br />

US economy plays a major role in world industrial output. Figure 2 reveals a V-shaped relation between the normalized price of oil and the normalized US monetary base. The right-hand arm of the V-shaped cloud of points is denser and, although the linear relation between the two variables over the whole sample is almost flat, within that right-hand arm a one standard deviation increase in the monetary base is associated with roughly a one standard deviation increase in the standardized price of oil, i.e., the relation is close to the 45 degree line.<br />


Figure 2. Scatter plots of gold and oil prices versus the US monetary base. Data points are at weekly frequency (measured on Wednesdays) and<br />

cover the subsample over which the US monetary base exhibits variation, from 1 September 2008 to 20 October 2010. Variables are standardized as<br />

in Figure 1 by subtracting the mean and dividing by the standard deviation, computed over the sample in Figure 1 (from 1 January 2005 to 20<br />

October 2010). The heavy sloped line is the 45 degree line, and the dashed sloped line is the best linear fit to the scatter.<br />

3 Dynamic Regression Analysis<br />

To further understand how the US monetary base affects the prices of oil, gold and other precious metals, I fit the finite distributed lag models in Table 1 below. It is believed that the FED does not take into account the prices of oil, gold, other precious metals, stocks, housing etc. in setting its monetary policy. What the FED targets is the federal funds rate, keeping an eye on inflation and the output gap. Empirical research by monetary economists has shown that what is known as the Taylor rule (Orphanides, 2008) best describes the behavior of the FED, at least on average and most of the time. Therefore it seems plausible to assume, for example, in a regression of the natural logarithm of the price of gold on the contemporaneous and lagged values of the natural logarithm of the US monetary base, that the contemporaneous and lagged values of the monetary base are strictly exogenous. Hence I maintain the strict exogeneity assumption in the regressions in Table 1 below:<br />

ln(P_t) = α + ∑_{j=0}^{n} β_j ln(M_{t−j}) + ε_t,   E(ε_t M_s) = 0 for each t and s,<br />

where ln(P) denotes, in the various regressions, the natural logarithm of the price of oil, gold, silver, platinum or palladium, and ln(M) is the natural logarithm of the US monetary base.<br />

In the present context the most interesting estimate from the finite distributed lag models below is the Cumulative Dynamic Multiplier (also known as the long run multiplier), which shows the effect of a permanent unit increase in the regressor on the dependent variable in question. The long run multiplier is given by the sum of the contemporaneous effect and the effects of the lags:<br />

CDM = ∑_{j=0}^{n} β_j.<br />
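As a minimal illustration of the finite distributed lag setup and its cumulative dynamic multiplier, the numpy sketch below fits the regression by OLS and sums the slope coefficients. The data are simulated, not the paper's series; the lag order and coefficients are made up, chosen so that the true CDM is 1 by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the paper's weekly series: ln(M) follows a
# random walk and ln(P) responds to current and lagged ln(M) with a
# long-run multiplier of 1.0 (= 0.6 + 0.3 + 0.1), the value classical
# monetary theory predicts. Illustrative data, not the paper's.
T, n_lags = 300, 2
lnM = np.cumsum(0.05 * rng.standard_normal(T))
beta_true = [0.6, 0.3, 0.1]
rows = np.arange(n_lags, T)
lnP = np.zeros(T)
lnP[rows] = sum(b * lnM[rows - j] for j, b in enumerate(beta_true))
lnP[rows] += 0.001 * rng.standard_normal(T - n_lags)

# Finite distributed lag design: constant, lnM_t, lnM_{t-1}, ..., lnM_{t-n}.
X = np.column_stack([np.ones(T - n_lags)]
                    + [lnM[rows - j] for j in range(n_lags + 1)])
coefs, *_ = np.linalg.lstsq(X, lnP[rows], rcond=None)

# Cumulative dynamic multiplier: the sum of the slope coefficients.
cdm = coefs[1:].sum()
print(round(cdm, 2))
```

With the simulated long-run multiplier set to 1, the estimated CDM recovers a value close to 1, mirroring the hypothesis tested in Panel B of Table 1.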

Note that classical monetary theory has a prediction for the value of the long run multiplier: a one percent permanent increase in the supply of money should result in exactly a one percent increase in the price level, so CDM = 1 according to classical economic theory. (As Milton Friedman famously said, “Inflation is always and everywhere a monetary phenomenon”; see Friedman, 1970.) That is, of course, if we believe that the prices of oil, gold and other precious metals are representative of the theoretical concept of inflation.<br />



In the bottom panel of Table 1 we see that when the finite distributed lag models are fitted by Ordinary Least Squares (OLS), there are signs of severe positive autocorrelation: the autoregressive parameter r of the residual is always above 0.7, and most of the time above 0.8, and the Durbin-Watson statistic DW0 is very small, always below 0.5 (see Panel C, Table 1). From this, and also from theoretical considerations, we might conclude that there is a unit root in the oil, precious metals and monetary base time series. Fitting the model by OLS would therefore amount to running a spurious regression. To take care of this problem I follow McCallum (2010) and Kolev (2011) and estimate the finite distributed lag models by feasible Generalized Least Squares (GLS). McCallum (2010) and Kolev (2011) show that when feasible GLS is used in the presence of heavy autocorrelation, one obtains efficient estimates and test statistics with correct size, even if one or both of the regressor and regressand contain a unit root. By using feasible GLS one in effect lets the data determine the order of (quasi-)differencing necessary to make the time series comply with the classical regression requirements. Hence I estimate the finite distributed lag models in Table 1 by the Cochrane & Orcutt (1949) feasible GLS procedure.<br />
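A minimal numpy sketch of the Cochrane & Orcutt (1949) iteration, under the AR(1) error assumption: estimate by OLS, estimate the residual autocorrelation ρ, quasi-difference the data with ρ, and repeat. The data, variable names and parameter values below are simulated and illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate y_t = a + b*x_t + u_t with AR(1) errors u_t = rho*u_{t-1} + e_t,
# the situation the Cochrane-Orcutt procedure is designed for.
T, a, b, rho = 500, 2.0, 1.5, 0.8
x = np.cumsum(rng.standard_normal(T))        # a unit-root regressor
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.standard_normal()
y = a + b * x + u

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def cochrane_orcutt(x, y, n_iter=20):
    """Iterate: OLS -> estimate rho from residuals -> quasi-difference -> OLS."""
    rho_hat = 0.0
    for _ in range(n_iter):
        # Quasi-differenced data: y_t - rho*y_{t-1} on x_t - rho*x_{t-1}.
        ys = y[1:] - rho_hat * y[:-1]
        xs = x[1:] - rho_hat * x[:-1]
        X = np.column_stack([np.ones_like(xs), xs])
        const_star, b_hat = ols(X, ys)
        a_hat = const_star / (1 - rho_hat)
        # Residuals of the *original* (undifferenced) model give a new rho.
        resid = y - a_hat - b_hat * x
        rho_hat = ols(resid[:-1, None], resid[1:])[0]
    return a_hat, b_hat, rho_hat

a_hat, b_hat, rho_hat = cochrane_orcutt(x, y)
print(round(b_hat, 2), round(rho_hat, 2))
```

The estimated autoregressive parameter plays the role of r in Panel C of Table 1, and the quasi-differencing is what lets the procedure cope with the near-unit-root residuals discussed above.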

I selected the number of lags to be included in the models by testing down for each variable, starting from 14 lags. The regression that required the largest number of lags turned out to be the one with the log oil price as dependent variable, at 10 lags. Although the precious metals required fewer lags, I applied the 10-lag structure across all of the dependent variables for ease of comparison across models. Standard regression theory suggests that including lags that are potentially zero in the population is not too onerous; it merely results in some loss of efficiency.<br />

Looking across Table 1, Panel A, we see that the coefficients on the contemporaneous and lagged values of the monetary base are mostly positive for the precious metals (though a few are negative here too), and somewhat mixed for oil. We therefore focus on the values of the Cumulative Dynamic Multipliers (CDM), displayed in Panel B of Table 1. The first two rows of Panel B display the estimates of the CDM and the p-value associated with the null hypothesis that the CDM is equal to 0. We see that the CDMs range between .77 (for oil) and 1.56 (for palladium), and the null hypothesis that the CDMs are 0 can be rejected at any significance level. What we read from this information is that, for example, a one percent permanent increase in the US monetary base leads to a .77% increase in the price of oil quoted in USD. Similarly, a one percent increase in the US monetary base leads to a .77% increase in the price of gold quoted in USD.<br />

The third and fourth rows of Panel B display the estimate of the quantity CDM − 1 and the associated test of the null hypothesis that CDM = 1. As we see, the null hypothesis that CDM = 1 cannot be rejected at standard significance levels for any of the time series, and the closest it comes to rejection is for gold, where the p-value of the test is .11. What we gather from Panel B of Table 1 is that the long-term effect, i.e., the effect of a one percent permanent increase in the US monetary base on the price of oil and precious metals in the long run, is positive, and most probably equal to 1, as predicted by classical monetary theory.<br />

                                     (1)        (2)        (3)        (4)          (5)<br />
                                     lnoil      lngold     lnsilver   lnplatinum   lnpalladium<br />
Panel A<br />
lnmonetary_base                     -0.0309     0.113      0.00467    0.355**      0.571*<br />
                                    (0.275)    (0.113)    (0.203)    (0.171)      (0.290)<br />
L.lnmonetary_base                    0.697***   0.170*     0.0886     0.163        0.109<br />
                                    (0.252)    (0.101)    (0.183)    (0.161)      (0.262)<br />
L2.lnmonetary_base                   0.301      0.0259     0.0228     0.226        0.0751<br />
                                    (0.252)    (0.102)    (0.183)    (0.163)      (0.264)<br />
L3.lnmonetary_base                  -0.211     -0.247**   -0.192      0.0808       0.136<br />
                                    (0.252)    (0.101)    (0.183)    (0.163)      (0.263)<br />
L4.lnmonetary_base                   0.444      0.214*     0.451**    0.129        0.0661<br />
                                    (0.273)    (0.112)    (0.198)    (0.184)      (0.295)<br />
L5.lnmonetary_base                  -0.664**    0.0460    -0.0896    -0.240       -0.216<br />
                                    (0.269)    (0.110)    (0.195)    (0.179)      (0.288)<br />
L6.lnmonetary_base                  -0.728***   0.157      0.0299    -0.146       -0.0820<br />
                                    (0.272)    (0.116)    (0.197)    (0.190)      (0.305)<br />
L7.lnmonetary_base                   0.650***   0.222**    0.408**    0.219        0.340<br />
                                    (0.232)    (0.0971)   (0.168)    (0.159)      (0.254)<br />
L8.lnmonetary_base                  -0.363     -0.0774    -0.0171     0.0563       0.0800<br />
                                    (0.231)    (0.0970)   (0.167)    (0.157)      (0.252)<br />
L9.lnmonetary_base                   0.0517     0.0744     0.0808     0.152        0.134<br />
                                    (0.220)    (0.0887)   (0.160)    (0.143)      (0.230)<br />
L10.lnmonetary_base                  0.622***   0.0759     0.116      0.300**      0.342<br />
                                    (0.219)    (0.0882)   (0.160)    (0.137)      (0.227)<br />
Constant                            -6.874*    -4.174**   -10.23***  -11.49***    -16.60***<br />
                                    (3.817)    (2.021)    (3.214)    (1.825)      (4.234)<br />
Panel B<br />
Cumulative Dynamic Multiplier (CDM)   .769       .773       .903      1.296        1.556<br />
Ho: CDM = 0, p-value                 0.005      0.000      0.000      0.000        0.000<br />
CDM - 1                             -.230      -.226      -.096       .296         .556<br />
Ho: CDM = 1, p-value                 0.385      0.110      0.665      0.021        0.062<br />
Panel C<br />
Observations                         106        100        102        100          100<br />
R2                                   0.857      0.992      0.730      0.990        0.947<br />
F                                    51.17      1057.3     22.16      804.7        142.3<br />
r                                    0.841      0.922      0.887      0.733        0.862<br />
DW0                                  0.271      0.198      0.280      0.497        0.223<br />
DW                                   1.674      1.935      1.858      1.535        1.236<br />

Table 1: Standard errors in parentheses (* p < 0.10, ** p < 0.05, *** p < 0.01). The data frequency is weekly, measurements on variables are<br />

taken on Wednesdays, and the sample runs from 1 September 2008 to 20 October 2010. The regressions are estimated by the Cochrane & Orcutt<br />

(1949) feasible GLS procedure.<br />

4 An Event Study of FOMC Announcements<br />

In this section I present an event study of the short term effects of the FOMC (Federal Open Market Committee) announcements of quantitative easing. Table 2 below displays the average price change from the closing price on the day before the announcement to the closing price one day after the announcement of quantitative easing by the FOMC. The event dates are given in the first column; for more details on these quantitative easing public announcements by the FOMC see Gagnon, Raskin, Remache & Sack (2010), Krishnamurthy & Vissing-Jorgensen (2011) and Wright (2011). For example, the first row of Panel A of Table 2 tells us that on 25 November 2008 the FOMC announced that it would undertake quantitative easing. The average percentage change, measured in per-day terms, between the closing price of oil on 24 November 2008 and the closing price of oil on 26 November 2008 was an increase of 0.53%. The last row in Panel A of Table 2 displays the average price change on non-event dates.<br />

It should be pointed out that the FOMC has not been very forthcoming about its quantitative easing activities. Looking at the data, we see that when the FOMC announces that it has decided to carry out quantitative easing, the action has already taken place. In other words, the FOMC announcements seem to endorse retrospectively what the trading desks of the FED have already done through open market operations, a point made by Taylor (2009) as well. It is not clear who decides on these matters: one possibility is that the decision is taken by the FOMC but hidden from the public for a while; the other is that anonymous people in the FED system take the decisions, and the FOMC announcements fulfil formal requirements of due process after the fact. This can be clearly seen by looking at Figure 1. The US monetary base grows from 915,030 million USD on 24 September 2008 to 1,483,267 million USD on 19 November, a growth of 62%. Nevertheless, the first FOMC announcement mentioning quantitative easing takes place on 25 November 2008, long after the fact. Therefore the results of the event study should be read cautiously.<br />

FOMC quantitative easing            (1)        (2)         (3)         (4)           (5)<br />
announcement date                   D.lnoil    D.lngold    D.lnsilver  D.lnplatinum  D.lnpalladium<br />
Panel A<br />
25nov2008                           0.00529   -0.00612     0.0128      0.0140        0.00256<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
1dec2008                           -0.0475     0.00257    -0.0518**    0            -0.0116<br />
                                   (0.0330)   (0.0138)    (0.0238)    (0.0187)      (0.0273)<br />
16dec2008                          -0.0524**   0.0259***   0.0305*     0.0124        0.00850<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
28jan2009                          -0.00108   -0.00293    -0.00719     0.00263       0.00526<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
18mar2009                           0.0248     0.0219**    0.0104      0.0187        0.0201<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
12aug2009                           0.00793    0.00567     0.0243      0.00677       0.00182<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
23sept2009                         -0.0420*   -0.00210    -0.0141     -0.00188      -0.00667<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
4nov2009                            0.000377   0.0130      0.0305*     0.0127        0.0185<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
10aug2010                          -0.0211     0.00104    -0.00984    -0.00581      -0.0104<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
21sept2010                         -0.0124     0.00554     0.00405     0.00493       0.00649<br />
                                   (0.0233)   (0.00975)   (0.0168)    (0.0132)      (0.0193)<br />
non-event                          -0.000795  -0.0000741  -0.000276   -0.000361      0.000780<br />
                                   (0.00169)  (0.000709)  (0.00122)   (0.000959)    (0.00140)<br />
Panel B<br />
Cumulative Event Effect (CEE)      -.137*      .064**      .029        .064          .033<br />
Ho: (CEE - non-event) = 0, p-value  0.077      0.047       0.592       0.139         0.597<br />
Panel C<br />
Observations                        398        398         398         398           398<br />
R2                                  0.0325     0.0377      0.0390      0.0147        0.00851<br />

Table 2: Standard errors in parentheses (* p < 0.10, ** p < 0.05, *** p < 0.01). The data frequency is daily, and the sample runs from 17<br />

September 2008 to 20 October 2010. Announcement effects, standard errors and test statistics are computed by running a regression of the daily<br />

change (between closing price today and closing price yesterday) in the log of the price in question on a mutually exclusive and exhaustive set of<br />

indicator variables (without a constant). For example 25nov2008 is an indicator variable that takes the value of 1 if the date is the FOMC<br />

announcement date 25 November 2008 or the next day 26 November 2008, and 0 otherwise. The estimates on the event indicators give the<br />

average daily price change in percentage points associated with the given event date, assuming that the announcement effect took place only on<br />
the day of the announcement and on the next day, i.e., a two-day event window. The indicator non-event takes the value of 1 if all the event<br />

indicators are 0, and gives the average daily price change in percentage points associated with dates on which no announcements occurred. The<br />

Cumulative Event Effect (CEE) is the sum of the estimated effects on all the event dates.<br />
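The indicator-variable regression described in this note can be sketched as follows. This is a minimal illustration on made-up numbers, not the paper's data; the function name `event_study` and the sample flags are my own.<br />

```python
import numpy as np

def event_study(returns, event_flags):
    """Regress daily log-price changes on a mutually exclusive, exhaustive
    set of event indicators plus a 'non-event' indicator, with no constant.
    Each coefficient is then the average daily change over its own dates.
    returns: (T,) array of daily log changes.
    event_flags: dict name -> boolean (T,) array; flags must not overlap.
    Returns (coefficients, cumulative event effect)."""
    non_event = ~np.logical_or.reduce(list(event_flags.values()))
    X = np.column_stack(list(event_flags.values()) + [non_event]).astype(float)
    beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
    coefs = dict(zip(list(event_flags) + ["non-event"], beta))
    cee = sum(coefs[name] for name in event_flags)  # sum over event dummies only
    return coefs, cee
```

With a two-day window, each event dummy is 1 on the announcement day and on the next day; the CEE is the sum of the event coefficients, matching Panel B of Table 2.<br />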

Looking at Panel B of Table 2 we see that the cumulative effect of all the quantitative easing announcements by<br />

the FOMC is positive for precious metals, but very small in magnitude, and statistically significant only for gold.<br />

(Significance is measured by testing the null hypothesis that the cumulative announcement effect price change is the<br />

same as the average price change on non-announcement dates.) All the FOMC announcements combined resulted in only a 6.4% higher price level as measured by the increase in the price of gold, a negligible short-term effect. The FOMC announcements of quantitative easing resulted in a lower price of oil in the short run: the cumulative announcement effect for the price of oil is -13.7%, significant at the 10% level.<br />

The negative effect for oil can probably be rationalized by recognizing that oil is a commodity heavily used in<br />

production and the US is a major world consumer of oil. When the FOMC announces quantitative easing, the markets realize that there will be more money chasing a given amount of output (an inflationary effect suggesting a positive cumulative impact on the price of oil), but also that the state of the US economy is bad, so there will be less production and less demand for oil (a deflationary effect suggesting a negative cumulative impact on the price of oil). Empirically, the second effect dominates.<br />

5 Summary<br />

I study the effects of quantitative easing engineered by the FED on the prices of internationally traded and dollar<br />

denominated commodities (oil) and precious metals (gold, silver, platinum and palladium). Sometime in September<br />

2008 the FED started a historically unprecedented expansion of the US monetary base. Curiously, this expansion has not resulted in any inflation so far, as measured by the US CPI. Similarly, the bond market for US government debt offers nominal returns close to 0, which suggests not only that the real interest rate is close to 0 but also that bond holders expect no inflation in the years to come.<br />

I present estimates from finite distributed lag models which suggest that a one percent increase in the US monetary base results in a one percent increase in the prices of internationally traded, dollar-denominated commodities and precious metals, as predicted by classical economic theory. The long-run multipliers for oil, gold, silver, platinum and palladium are all about 1, estimated from feasible GLS regressions of the log of the price on the log of the monetary base. The long-run multipliers are all significantly different from 0, and the null hypothesis that they equal 1 cannot be rejected at the 10% significance level. If we take the prices of oil and precious metals as legitimate measures of inflation, the US is already in a state of very high inflation: every percent increase in the monetary base has resulted in a percent increase in the prices of commodities and precious metals. In view of my findings, the behavior of the US government bond market and the US CPI is puzzling.<br />
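The feasible GLS estimation of a long-run multiplier can be sketched with a Cochrane-Orcutt (1949) iteration. The series below are synthetic stand-ins for the log price and log monetary base, constructed so that the true multiplier is 1; the function and variable names are illustrative, not the paper's code.<br />

```python
import numpy as np

def cochrane_orcutt_slope(y, x, iters=5):
    """Feasible GLS via Cochrane-Orcutt: fit OLS, estimate the AR(1)
    coefficient rho of the residuals, quasi-difference both sides, refit.
    Returns the slope on x, i.e. the long-run multiplier in a static
    log-log regression."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(iters):
        u = y - X @ beta                             # residuals, original units
        rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])   # AR(1) estimate
        y_star = y[1:] - rho * y[:-1]                # quasi-differenced data
        X_star = X[1:] - rho * X[:-1]
        beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta[1]

# Synthetic illustration: log price = 2 + 1.0 * log base + AR(1) noise
rng = np.random.default_rng(0)
n = 1500
log_base = np.linspace(0.0, 5.0, n)
e = 0.02 * rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + e[t]
log_price = 2.0 + 1.0 * log_base + u
multiplier = cochrane_orcutt_slope(log_price, log_base)
```

On this synthetic series the recovered slope lies close to the true multiplier of 1, mirroring the result reported for the actual price and monetary base data.<br />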

To complement the above evidence of long run price effects of quantitative easing by the FED, I present an<br />

event study of the price effects of quantitative easing announcements by the FOMC. I take the event window to be<br />

the day of the quantitative easing announcement and the next day. Here the price increases are small for precious metals: the cumulative price increase resulting from all the FOMC announcements ranges from 2.9% for silver to 6.4% for gold. The cumulative effect for oil is actually negative, at -13.7%; that is, the price of oil decreases on the quantitative easing announcement dates. These results probably indicate that the news the markets read from the FOMC announcements of quantitative easing is that the FED believes the US economy is in very bad shape. Hence, in terms of the price effects of these announcements, the bad economic conditions outweigh the monetary expansion content. There is also clear evidence that the FOMC makes quantitative easing announcements long after the monetary expansion has already taken place, thereby retroactively admitting what it has been doing instead of disclosing its intentions for the future.<br />

Overall, the results here are consistent with the long-run/short-run dichotomy in economics. I do not mean to imply that the FED should target commodity and precious metal prices when setting monetary policy (Mankiw & Reis, 2003). The point is that by looking at commodity and precious metal prices one can see inflation that remains invisible when looking only at the US CPI and the bond market. The present study also suggests a certain dissonance between the markets for US government bonds and the markets for commodities and precious metals: the implied inflation expectations are vastly different. This raises the possibility that the money-printing activities of the FED caught investors in US government debt asleep at the wheel.<br />

6 References<br />

Cochrane, John (2011). Understanding policy in the great recession: Some unpleasant fiscal arithmetic. European<br />

Economic Review, 55, 2–30.<br />

Cochrane, D., & G. H. Orcutt (1949). Application of least squares regression to relationships containing<br />

autocorrelated error terms. Journal of the American Statistical Association, 44(245), 32–61.<br />

Friedman, Milton (1970). Wincott Memorial Lecture, London, September 16, 1970.<br />

Gagnon, Joseph, Matthew Raskin, Julie Remache, & Brian Sack (2010). Large-Scale Asset Purchases by the Federal<br />

Reserve: Did They Work? Federal Reserve Bank of New York Staff Report no. 441, March.<br />

Kolev, Gueorgui I. (2011). The "spurious regression problem" in the classical regression model framework.<br />

Economics Bulletin, 31(1), 925-937.<br />

Krishnamurthy, Arvind, & Annette Vissing-Jorgensen (2011). The Effects of Quantitative Easing on Interest Rates.<br />

Brookings Papers on Economic Activity, forthcoming, Fall 2011.<br />

Mankiw, N. Gregory (2010). Macroeconomics, 7 th edition. Worth Publishers.<br />

Mankiw, N. Gregory, & Ricardo Reis (2003). What Measure of Inflation Should a Central Bank Target? Journal of<br />

the European Economic Association, 1(5), 1058–1086.<br />

McCallum, Bennett T. (2010). Is the spurious regression problem spurious? Economics Letters, 107(3), 321-323.<br />

Orphanides, Athanasios (2008). Taylor rules. The New Palgrave Dictionary of Economics, 2nd Edition, vol. 8, 200–204.<br />

Taylor, John (2009). The Need to return to a monetary framework. Business Economics, 44, 63 – 72.<br />

Wright, Jonathan H. (2011). What does Monetary Policy do to Long-Term Interest Rates at the Zero Lower Bound?<br />

Working paper, Department of Economics, Johns Hopkins University.<br />



AGRI-BUBBLES ABSORB CHEAP MONEY: THEORIES AND PRACTICE IN CONTEMPORARY<br />

WORLD<br />

Evdokimov Alexandre Ivanovich & Soboleva Olga Valerjevna,<br />

Saint-Petersburg State University of Economy and Finance, Russia<br />

Email: ier2002@mail.ru, prof-evdokimov-alex@yandex.ru, www.finec.ru<br />

Abstract. Agflation, biflation, stagflation, cycle development and world food crisis have become key words for economic discussions.<br />

Since 2002, booming stock markets, accompanied by a rally first in crude oil and then in wheat and other grain prices, have compelled market analysts to reconsider the core factors behind the upward trend and the role of speculators. In 2008 the Global financial crisis was expected to correct the speculative trend and burst market bubbles; however, this did not happen, and high volatility of world prices has persisted in both capital and commodity markets. In summer 2010 world food prices began to grow rapidly, and this trend continued into the beginning of 2011, undermining political efforts to overcome the Global financial crisis. We examined the behavior of stock and commodity markets and the fundamental reasons for the upward trend, and we have come to accept that the current rapid growth of stock and commodity markets is supported not by real business activity but by central banks following a 'cheap money' policy.<br />

Keywords: biflation, biofuels, bubble, ‘cheap money’ policy, global financial crisis, income-absorption theory, incomes parity, market<br />

equilibrium, the Relative Strength Index (RSI), speculative activity<br />

JEL classification: E2, E3, E5, F4, G01, G1, H5, I3.<br />

1 Introduction<br />

Since 2002 basic commodity prices grew actively, with slight corrections for crude oil in 2003 and 2006 and for wheat and corn in 2004. The general trend of world commodity prices remained ascending until the beginning of the Global financial crisis in 2008, and the main stock markets had four years of fluctuating but enthusiastic growth until mid-2007. According to data published by the Financial Times, the Global financial crisis dropped stock market indexes to 28%-70% of their 2007 levels. World commodity prices fell relative to 2007 by 39% for wheat, 25% for corn and 54% for crude oil. Many analysts predicted a slow and uneven recovery of the world economy.<br />

Several causes of the crisis are on the agenda: financial globalization, the convergence of banking and securities exchange markets, increasing speculative activity, and innovations in the derivatives market, such as structured investment vehicles (SIVs) and collateralized debt obligations (CDOs), which led to the mortgage crisis in the USA (OECD Report, 2008). Finally, a growing number of analysts agree that the Global financial crisis coincided with the end of a Kondratiev business cycle. As Kondratiev cycles usually last 35-60 years, it is considered that the current recession will last for several years.<br />

However, in 2009 stock markets defied the pessimistic forecasts and showed 25-30% growth relative to 2008; in particular, Russia's stock market grew by 175% relative to 2008. Oil prices recovered by 74% relative to 2008, and grains joined the price rally in mid-2010: relative to the end of 2009, the wheat front-month futures price grew by 78%, corn by 85% and oil by 27%. Faced with such extreme price jumps, the question naturally arises: are the current soaring food commodity prices the result of a speculative bubble, or do solid fundamental reasons support the ascending prices?<br />

In this paper we aim to provide our account of the current role of speculative activity in the stock and commodity markets, to summarize the fundamental factors contributing to the upward trend, and to propose a theoretical basis for food price growth during the recession phase of the business cycle.<br />

2 Critical Analysis of Contemporary Food Commodity Price Growth and its Theoretical Basis<br />

There are different approaches to the critical analysis of the current upward trend in commodity prices, but we deliberately simplify the data analysis by calculating the Relative Strength Index (RSI), a technical indicator (oscillator) that signals overbought or oversold moments for a given commodity or stock in exchange trading. Observing the behavior of price movements and RSI movements during 2001-2010, we tried to form an overview of<br />



the role of speculators in the present day. The most promising approach for this research is to combine technical and fundamental analyses of the price movements outlined. For this purpose we used the report of the United States Department of Agriculture (USDA) published in 2008 (Trostle, 2008), which investigated the supply and demand factors that contributed to the recent increase in food commodity prices. However, we believe that in the circumstances of the Global financial crisis it is insufficient to rely on fundamental factors originating from global supply and demand; the conditions of financial markets and the monetary systems of different countries must also be considered as important factors affecting commodity price growth. We therefore expanded the fundamental analysis with two theoretical approaches: the aspiration to achieve income parity, and the absorption of surplus money through price growth.<br />

2.1 Balancing Role of Market Speculators is Doubtful<br />

The principal assumption of the balancing role of market speculators was accepted at the end of the 1970s (Friedman, 1968); however, in the 1990s it was demolished by numerous analysts who investigated the behavior of market speculators in the inefficient markets of developing and emerging countries. Highly efficient markets are characterized by high business transparency and information penetration, and it was earlier considered that the markets of developed countries were highly efficient. Market speculators in highly efficient markets act in advance: thanks to them, all market operators learn of market signals and can take precautionary actions to reduce risks. For example, when the market price exceeds the equilibrium level, speculators begin to sell the overbought item and their actions pull the price back down to equilibrium.<br />

For the last twenty years the balancing role of speculators has weakened in both efficient and inefficient markets. It has been observed that inefficient markets suffer from speculative activity because speculators there do not adjust prices toward market equilibrium but form market trends themselves: they continue to buy an overbought item in the hope that its price will rise further. Their confidence in ascending prices is partly supported by generally high inflation expectations and by foreign investment inflows, which may also contribute to inflation in developing countries. Since the end of the 1980s, as market agents consolidated in the banking sector and among other financial intermediaries, financial globalization has worsened the competitive environment in developed, developing and emerging markets, because one or a few big investors, for example hedge funds, can move market prices. Hence, at present, speculative activity is not a problem for developing countries only, as the RSI and price movements illustrate.<br />

The RSI was introduced in 1978 to indicate overbought and oversold moments. The RSI oscillator is usually used together with other technical indicators, volume data, support and resistance lines, directional movements, etc. The index normally varies in the range 30-70; leaving this range is a technical signal of an anticipated change of the current trend: a downturn when the index crosses above the 70-point level, and an upturn when it falls below the 30-point level. A market bubble is the term for an overbought moment when numerous agents, encouraged by their own positive anticipations, continue to buy a stock or commodity and there is no resistance to the ascending trend. Bubbles are considered to burst (i.e., to turn toward rapid decline) just after the moment when the RSI exceeds 70, though this does not always happen.<br />
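The oscillator can be computed directly. The sketch below implements Cutler's RSI, the simple-moving-average variant shown in Figure 1; the 14-period window is a common default assumed here, since the text does not state the window used.<br />

```python
import numpy as np

def cutlers_rsi(prices, n=14):
    """Cutler's RSI: simple moving averages of up- and down-moves over n
    periods. (Wilder's original 1978 RSI uses smoothed averages instead.)
    Returns an array aligned with prices; the first n values are NaN."""
    d = np.diff(prices)
    gains = np.where(d > 0, d, 0.0)
    losses = np.where(d < 0, -d, 0.0)
    rsi = np.full(len(prices), np.nan)
    for t in range(n, len(prices)):
        up = gains[t - n:t].mean()      # average gain over the window
        down = losses[t - n:t].mean()   # average loss over the window
        rsi[t] = 100.0 if down == 0 else 100.0 - 100.0 / (1.0 + up / down)
    return rsi
```

A steadily rising series pins the index at 100 (fully overbought) and a steadily falling one at 0 (fully oversold); readings above 70 or below 30 give the trend-change signals discussed above.<br />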

We analyzed the RSI for wheat, crude oil and some stock markets (the RTS Russia, FTSE 100 and SP 500 indexes) and found that during 2001-2010 there were moments when markets did not switch to a downturn trend even though the RSI indicated an overbought state. That happened with the SP 500 in 2003, the FTSE 100 in 2005, the RTS in 2005-2006, wheat in 2006 and in July and December 2010, and crude oil in 2009. An illustration of wheat price behavior in February 2010 - March 2011 is provided in Figure 1. At the moments when the RSI exceeds the 70-point level the market cools down, but the trend does not reverse; market agents seem to pause, waiting for an additional trigger to support the price rally. Conversely, whenever the RSI reaches the 30-point level, market agents switch to an upward trend. This confirms the market's readiness to accept high prices.<br />

We ran linear single-factor regressions on monthly indexes and front-month futures prices published by the Financial Times for the period March 2001 - December 2010 and found (Table 1): strong ties between the FTSE 100 and SP 500 indexes; strong ties between the RTS and the FTSE 100/SP 500 indexes; strong ties between oil futures prices and wheat futures prices; strong ties between the RTS and oil/wheat prices; weak ties between wheat futures prices and the FTSE 100/SP 500; and middle ties between oil futures prices and the FTSE 100/SP 500. This indicates that mature financial markets such as London or New York are less responsive to commodity price movements than young emerging markets such as Russia, which may be explained by the dominant role of speculative capital in inefficient markets and the irrationally formed market trends that result. The Durbin-Watson statistics confirm positive autocorrelation in the data series, which normally illustrates the dependence of current futures prices on previous market expectations.<br />
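The statistics reported in Table 1 (R, R², empirical F, Durbin-Watson) can be reproduced for any pair of series with a routine like the following; the data here are synthetic and the function name is my own.<br />

```python
import numpy as np

def single_factor_stats(y, x):
    """OLS fit of y = a + b*x with the diagnostics reported in Table 1:
    correlation R, determination R^2, empirical F-statistic, and the
    Durbin-Watson statistic of the residuals."""
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - (a + b * x)
    r = np.corrcoef(x, y)[0, 1]
    r2 = r ** 2
    n = len(y)
    f = r2 / (1.0 - r2) * (n - 2)  # F-statistic for a single regressor
    dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # ~2 means no autocorrelation
    return {"R": r, "R2": r2, "F": f, "DW": dw}
```

A Durbin-Watson value well below 2, as in every row of Table 1, signals positive autocorrelation in the residuals.<br />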

During the acute phase of the Global financial crisis, market speculators were accused of playing the main role in forming high price volatility, bubbles and financial crises. This can be accepted only partially, because we know examples of stock market bubbles, such as the IT/telecom stock crisis of 2002 in the USA, that occurred without involvement of the banking sector or a general economic slowdown.<br />

[Figure 1: Wheat price movements and month's RSI, February 2010 - March 2011. Panels: close price of Wheat Front Month Futures WC1:CBT ($ cents per bushel), deal volume, and Cutler's RSI.]<br />
<br />
Pair (y - x)                      R      R2     F emp.   F tab.   F result      DW      t const.  t ind.   t tab.   Tie<br />
FTSE 100 (y) - SP 500 (x)         0,924  0,853  679,3    19,49    significant   0,144   2,80      26,06    1,66     strong tie<br />
RTS (y) - SP 500 (x)              0,712  0,506  120,1    19,49    significant   0,075   -6,95     10,96    1,66     strong tie<br />
RTS (y) - FTSE 100 (x)            0,745  0,555  146      19,49    significant   0,081   -7,96     12,08    1,66     strong tie<br />
SP 500 (y) - Oil price (x)        0,519  0,27   43,21    19,49    significant   0,098   27,79     6,57     1,66     middle tie<br />
FTSE 100 (y) - Oil price (x)      0,569  0,323  55,94    19,49    significant   0,103   29,13     7,48     1,66     middle tie<br />
RTS (y) - Oil price (x)           0,891  0,795  453,1    19,49    significant   0,142   -3,12     21,29    1,66     strong tie<br />
SP 500 (y) - Wheat price (x)      0,39   0,152  20,94    19,49    significant   0,092   24,09     4,58     1,66     weak tie<br />
FTSE 100 (y) - Wheat price (x)    0,431  0,186  26,76    19,49    significant   0,098   24,83     5,17     1,66     weak tie<br />
RTS (y) - Wheat price (x)         0,78   0,608  181,5    19,49    significant   0,164   -1,86     13,47    1,66     strong tie<br />
Wheat price (y) - SP 500 (x)      0,39   0,152  20,94    19,49    significant   0,101   -0,07     4,58     1,66     weak tie<br />
Wheat price (y) - FTSE 100 (x)    0,431  0,186  26,76    19,49    significant   0,11    -0,67     5,17     1,66     weak tie<br />
Wheat price (y) - RTS (x)         0,78   0,608  181,5    19,49    significant   0,217   10,71     13,47    1,66     strong tie<br />
Wheat (y) - Oil (x)               0,799  0,639  206,7    19,49    significant   0,269   5,36      14,38    1,66     strong tie<br />
<br />
Table 1: Results of linear single-factor regression. R = correlation coefficient; R2 = determination coefficient; F emp./F tab. = empirical and table F-statistics (P = 0,05); DW = Durbin-Watson statistic; t const./t ind./t tab. = t-statistics for the constant, the independent variable, and the table value (P = 0,05).<br />

Summing up, we believe that the balancing role of market speculators has changed since the 1970s. Speculative activity does contribute to higher volatility of market prices, and speculators are powerful enough to form upward trends and bubbles. However, the price rally does not originate only in traders' psychological pursuit of short-term profitability; there must be fundamental factors that support the increasing willingness of market speculators to take additional risks.<br />

2.2 Fundamental Factors Contributing to Ascending Food Commodity Prices<br />

In 2008 Ronald Trostle and a USDA team published a research paper on the supply and demand factors that contributed to the increase in food commodity prices from 2006 until the financial crisis of 2008 (Trostle, 2008). The analysts divided the factors into two groups. Demand factors are: increasing population; rapid economic growth and therefore the rising purchasing power of developing countries; rising per capita meat consumption; declining demand for stocks of food commodities; the rapid expansion of biofuel production; dollar devaluation; large foreign exchange reserves; and aggressive purchases by importers. Supply factors are: slowing growth in agricultural production; the escalating crude oil price; rising farm production costs; and adverse weather. We tested the continued relevance of those factors in light of the consequences of the Global financial crisis.<br />

An increasing population is admittedly a stable factor that will raise general demand for food in the future and hence push food prices up. We do not doubt that adverse weather can disrupt food supply at any time, under any government. Slowing growth in agricultural production seems a reasonable factor, because most developed and some emerging countries have already reached their maximum level of agricultural productivity, while fundamentally new advanced technologies are not available. Victor M. Polterovich (2009) asserts that biotechnologies and genetic engineering have not become an impulse for a significant increase in agricultural productivity or farmers' profitability. The global strategy of developing agricultural capacity in middle- and less-developed countries stumbles on political and social instability in these countries, which raises investment risks.<br />

Rapid economic growth and the rising purchasing power of developing countries have not definitively formed yet. Most economists forecast a faster recovery of developing countries after the Global financial crisis compared with developed countries. We accept the idea; however, we question whether economic growth induces equally active growth of the real incomes of the majority of the population in developing countries. In practice we found that real incomes in developing countries increase less actively than national product. The economies of developing countries currently depend on their exports and on the purchasing capacity of developed countries, so a decline in the purchasing power of developed countries affects the incomes of developing countries as well. At the same time, living standards in developing countries, particularly agricultural exporters, remain extremely low, and it will take several decades to eliminate this divergence between developing and developed countries. Moreover, analysts must remember that when real incomes grow, consumers tend to increase non-food purchases while keeping a conservative dietary pattern.<br />



Escalating crude oil prices are a strong factor behind the recent rise in popularity of biofuel technologies and behind some changes in the production policy of agricultural exporters. This factor has recently induced a rapid expansion of biofuel production in some countries, and therefore a shrinking supply of food grains. Currently the area planted with biofuel grains is a negligible share of the world's cropland, so the factor is not yet fundamentally strong, though it is alarming. Switching to biofuel grain production is a young trend, but market speculators triggered a price rally in anticipation. In addition, the increase in energy prices raises farm production costs, particularly in developed countries that use energy-intensive technologies. Energy price growth will remain a strong factor contributing to ascending food commodity prices.<br />

Declining demand for stocks of food commodities is not a factor of end-consumer demand; rather, it results from changes in government policy aimed at reducing the budget expense of maintaining large stocks. A recent report of the European Commission Directorate-General for Agriculture and Rural Development (The Common Agricultural Policy Explained, 2008), in the chapter 'A record to be proud of', boasts of the reduced level of public storage of cereals, beef and butter compared with the 1985-1995 level. Aggressive purchases by importers, a factor mentioned by the USDA analysts, result from the shortage of food commodity stocks. The decline of stocks is a strong basis for anxious speculative activity and therefore for increasing volatility of food prices.<br />

Financial market turmoil was not emphasized by the USDA analysts in 2008, though their report mentions dollar devaluation and the large foreign exchange reserves of developing countries among the demand factors pushing food prices up. We consider that the contemporary monetary policies of developed and developing countries lead to large US dollar reserves; nevertheless, this is not a direct cause of food price growth. The only logical explanation for the influence of large US dollar reserves on rapid price growth appears when we consider speculators' confidence in a future upward trend: it is trade speculators who incorporate global money liquidity into their expectations of high future prices. Finally, most economists anticipate US dollar devaluation or even a dollar crash; meanwhile, since 2008 the US dollar has depreciated gradually, not dramatically. We conclude that the dollar's moderate devaluation could not have been a strong factor in the rapid growth of food commodity prices in 2010.<br />

2.3 Some Theoretical Approaches to the Long-Term Upward Trend of Food Commodity Prices<br />

When we collected analytical discussions of the reasons for the rapid growth of food prices, we found only a scarce theoretical basis for developing firm assumptions and forecasts. For example, the fundamental supply and demand factors discussed above connect to economic theory only through market equilibrium, and market equilibrium can be difficult to attain when exporting and importing countries form unpredictable food commodity stocks for future consumption, as well as in the case of an inflation spiral. Market price factors therefore have to be investigated carefully, with attention to market specifics and deviations.<br />

The Global financial crisis has already borne fruit for economic science, as more economists examine the role of monetary policy in economic development and the ability of monetary authorities to address business-cycle problems. The most advanced work develops the idea of global equilibrium in both the financial and business sectors. We agree that the processes of globalization contributed to the extent and depth of the Global financial crisis, but we have not discovered any convincing theory of global equilibrium. For this paper we reconsidered two theoretical approaches from financial studies that may explain the reasons for the long-term upward trend of food commodity prices.<br />

2.3.1 Food Commodity Price Growth as a Means to Correct Income Disparity<br />

The problem of income parity has been discussed since the Great Depression of the 1930s (Stine, 1937). In the general view, all groups of society engaged in different economic activities in a country should be guaranteed comparably satisfactory standards of living; otherwise employees leave less profitable activities to earn higher incomes in other industries (sectors of the economy). Adam Smith's laissez-faire principle confronts income parity, since the market itself can identify the most profitable activities while less profitable ones naturally decline. However, in the twentieth century social principles of economics came onto the agenda of all economic systems, partly due to the achievements of the former USSR (Union of Soviet Socialist Republics). Income parity is thus a social theoretical approach within economics. After the Second World War more countries acknowledged the role of government in redistributing income from the most lucrative to less successful business activities through systems of support and social guarantees.<br />

In the post-crisis period of the 1930s the USA became a pioneer of intervention in food commodity markets and of budgetary support for agricultural incomes. Though the American system of income support was not ideal, the USA demonstrated the most effective system of inventing and reforming support instruments. In the 1980s the USA actively lobbied the General Agreement on Tariffs and Trade (GATT) negotiations (now the World Trade Organization, WTO) to reduce the level of agricultural support, appealing for a reconsideration of farming's ability to compete fairly. However, we must emphasize that before the liberal reforms American farmers had enjoyed a long history of fruitful years of high profits and strong government support, which has formed US farmers into confident competitors today. Currently the incomes of US farmers are significantly diversified: only 12% of the total income of the average US farm family in 2003-2008 came from agricultural activity (The 2008/2009 World Economic Crisis…, 2009).<br />

The end of the 1980s saw the highest level of agricultural support in the developed countries. However, GATT/WTO agricultural agreements obliged member states to reduce support and to reform support instruments. Countries are gradually decreasing direct payments tied to production volumes, areas and livestock numbers. Many developed countries currently pay fixed amounts to support farmers' incomes, with such payments usually calculated from historical levels of money transfers. These payments are not adjusted to changes in the average incomes of employees in other sectors of the economy.<br />

Discussions about the methods of calculating income parity are always on the agenda. At present there are no comparable statistics on income per employee in agriculture across countries, so we decided to compare the constant-price cumulative growth of gross value added (GVA) in agriculture and in other sectors (Figure 2). Data for the graphs are available at http://data.un.org/Explorer.aspx. It is obvious that agricultural activity is less profitable/productive than other industries and sectors of the economy. The same trends were observed for South Africa, Jordan, Algeria, Mexico, Poland, Canada, New Zealand, Spain, Norway and France (not presented in Figure 2). The disparity of GVA growth rates is more visible in the period since the 1990s, when IT, telecommunications, financial and other services became the core drivers of economic growth.<br />
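The constant cumulative growth index plotted in Figure 2 can be sketched as follows; the annual real growth rates below are hypothetical placeholders, not the UN data used in the paper:<br />

```python
# Sketch: constant-price cumulative growth indices for agriculture vs. other
# sectors, as compared in Figure 2. Growth rates are hypothetical.

def cumulative_index(real_growth_rates):
    """Chain annual real growth rates into a cumulative index (base year = 1)."""
    index, path = 1.0, [1.0]
    for g in real_growth_rates:
        index *= 1.0 + g
        path.append(index)
    return path

agriculture = cumulative_index([0.01, 0.02, -0.01, 0.01])   # slow real growth
other_sectors = cumulative_index([0.04, 0.05, 0.03, 0.04])  # services/IT-led growth

# The widening gap between the two paths is the GVA growth disparity
# that the paper reads off Figure 2.
print(agriculture[-1], other_sectors[-1])
```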

The emerging concept of 'biflation', introduced by F. Osborne Brown in 2003, could be accepted as a means of achieving income parity. Biflation describes a situation in which prices of commodity-based assets increase while prices of debt-based assets decrease. Debt-based goods include expensive houses, automobiles, luxury goods, etc. In the post-crisis period all sectors of the economy that produce debt-based goods face shrinking demand and record numerous bankruptcies and layoffs. On the other hand, consumers continue to buy staples however much prices soar, although this may induce social unrest all over the world, particularly in developing countries. The Global financial crisis exposed the imbalance between overvalued services and debt-based goods and the undervalued output of food commodities and other basic industries of the economy. We therefore suppose that the current rapid growth of food commodity prices reflects an implicit intention of economic agents to raise incomes from agricultural activity to the level of the industrial and services sectors.<br />

2.3.2 Food Commodity Price Growth Absorbs Cheap Money<br />

The income-absorption theory was introduced by Alexander in 1952 (Alexander, 1952) to explain the elimination of income surpluses in a period of national currency devaluation and export growth. When a country pursues export growth, the monetary authorities apply a 'cheap money' policy: interest rates, municipal bond rates and official bank reserve rates are reduced. As a result, the exchange rate of the national currency decreases, while foreign demand grows, attracted by cheaper exports from the country with the devalued currency. However, at the same time import prices increase, and if the country has a significant share of imported goods in the consumer basket, then prices of both imported and domestic goods will grow. The growth of domestic prices absorbs the surplus of national income earned from export expansion.<br />
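A stylized arithmetic illustration of this absorption mechanism, with entirely hypothetical numbers:<br />

```python
# Toy illustration of the income-absorption mechanism described above:
# a devaluation boosts export earnings, but import-driven inflation
# absorbs part of the real income gain. All numbers are hypothetical.

def real_income_gain(nominal_income_growth, import_share, import_price_growth):
    """Real income growth after import-driven consumer price inflation.

    The consumer price level is approximated as a weighted average of
    domestic prices (assumed flat here) and import prices.
    """
    inflation = import_share * import_price_growth
    return (1.0 + nominal_income_growth) / (1.0 + inflation) - 1.0

# 10% export-driven nominal income growth; 40% of the consumer basket is
# imported and import prices jump 20% after devaluation.
gain = real_income_gain(0.10, 0.40, 0.20)
print(round(gain, 4))  # most of the 10% nominal gain is absorbed
```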



Figure 2. Dynamics of Constant Growth of Gross Value Added (cumulative growth index, constant prices: GVA in agriculture, hunting, forestry, fishing vs. GVA in other sectors excluding agriculture; panels for the United States of America, Russia, Greece and Australia)<br />

Income absorption may not occur under a strict monetary policy, when money liquidity is limited and controlled. However, strict monetary policy is not characteristic of the contemporary world: since the 1990s many developed countries have decreased the cost of money. Figure 3 illustrates the dynamics of the value of money (one-year deposit rates) and the constant cumulative growth of wheat prices. Data for the calculations are available at http://data.un.org/Explorer.aspx and http://faostat.fao.org/site/570/DesktopDefault.aspx?PageID=570#ancor. The same trends were observed for South Africa, Jordan, Japan, Brazil, Argentina, Canada, Spain and France. There is no doubt that money becoming cheaper is a global trend, supported in part by foreign investors who have access to cheap money in one country and can invest at higher rates of return in other countries. Owing to the income-absorption effect, 'cheap money' policy becomes ineffective in stimulating business activity. In a period of recession, when there is no principally new technology or product that could restart the business cycle, 'cheap money' policy produces inflation that absorbs the money surplus.<br />

We consider that the present-day readiness to accept higher prices must be supported by some macroeconomic factor stronger than the psychology of trade speculators and consumers. Most consumers, especially in developing countries, accept rising prices. Such irrational behavior is a case of the 'ratchet effect': once prices reach a peak, consumers anticipate further price growth, and these anticipations bring on higher inflation. The desire to improve living standards, and therefore to increase consumption, wins over rational behavior and the preference for saving. We emphasize that such irrational consumer behavior is actively supported by easy access to credit. For the last 15 years banks have switched to serving consumers, as they admit to earning higher margins from retail banking than from corporate clients. The Global financial crisis initially shrank banks' liquidity, but central banks began to support commercial banks by injecting cheap money into overheated markets.<br />




deposit rate, %<br />

deposit rate,%<br />

18<br />

16<br />

14<br />

12<br />

10<br />

8<br />

6<br />

4<br />

2<br />

0<br />

60,0<br />

50,0<br />

40,0<br />

30,0<br />

20,0<br />

10,0<br />

3 Summary<br />

Figure 3. Dynamics of Wheat Price Growth and Costs of Money (annual deposit interest rate, %, and wheat price cumulative index, constant prices; panels for the United States of America, Russia, Greece and Australia)<br />

It is the political obligation of any government to guarantee satisfactory standards of living, which is difficult to achieve in an environment of extreme growth in world food prices. According to numerous analytical reviews, the recent unrest in North Africa and the Arab countries happened partly due to the food commodity price rally that commenced in the summer of 2010. Economists have to understand the current market environment and find effective instruments to prevent or overcome negative market consequences.<br />
<br />
Our brief investigation has brought us to the general conclusion that the current economic and financial environment forms favorable conditions for continuing growth of food prices in the future. Though theories and empirical observations support the upward price trend, it is the governments' responsibility to influence the character of such growth. Income disparity and cheap-money absorption call for a new approach to government regulation in the twenty-first century. Wise governments should combine social aims with exacting requirements for financial market behavior. The appetite of portfolio investors for short-term high returns should be restrained, and governments should develop regulatory means within a framework of long-term strategic development.<br />



4 References<br />
<br />
Alexander, S. (1952). Effects of a Devaluation on a Trade Balance. IMF Staff Papers, Vol. 2 (April), pp. 263-278.<br />
<br />
Financial Market Highlights – May 2008: The Recent Financial Market Turmoil, Contagion Risks and Policy Responses. Financial Market Trends, No. 94, Volume 2008/1, June 2008, p. 20. URL: http://www.oecd.org/dataoecd/55/51/40850026.pdf<br />
<br />
Friedman, M. (1968). The Case for Flexible Exchange Rates. In R.E. Caves and H.G. Johnson (eds.), Readings in International Economics. Homewood (Ill.), p. 426.<br />
<br />
Polterovich, Victor M. (2009). Mechanism of Global Economic Crisis and Problems of Technologies Modernization. The Journal of the New Economic Association, March 2009. URL: http://www.econorus.org/sub.phtml?id=21<br />
<br />
Trostle, Ronald (2008). Global Agricultural Supply and Demand: Factors Contributing to the Recent Increase in Food Commodity Prices. A Report from the Economic Research Service, USDA, May 2008, p. 30. URL: www.ers.usda.gov<br />
<br />
Stine, O.C. (1937). Income Parity for Agriculture. The Conference on Research in Income and Wealth, Studies in Income and Wealth, Volume 1, Part 8, pp. 317-345. URL: http://www.nber.org/chapters/c8143<br />
<br />
The Common Agricultural Policy Explained. European Commission Directorate-General for Agriculture and Rural Development, 2008, p. 20. URL: http://ec.europa.eu/agriculture/<br />
<br />
The 2008/2009 World Economic Crisis: What It Means for U.S. Agriculture. A Report from the Economic Research Service, USDA, March 2009, p. 30. URL: http://www.ers.usda.gov/Publications/WRS0902/<br />


THE INFORMATION CONTENT OF IMPLIED VOLATILITY IN THE CRUDE OIL FUTURES MARKET<br />

Asyl Bakanova<br />

University of Lugano and Swiss Finance Institute<br />

E-mail: asyl.bakanova@usi.ch<br />

Abstract. In this paper, we evaluate the information content of the option-implied volatility of the light, sweet crude oil futures traded at the New York Mercantile Exchange (NYMEX). This measure of volatility is calculated using a model-free methodology that is independent of any option pricing model. We find that option prices contain important information for predicting future realized volatility. We also find that implied volatility outperforms historical volatility as a predictor of future realized volatility and subsumes all the information contained in historical data.<br />

Keywords: Volatility forecasts, implied volatility, extreme value, crude oil.<br />

1. Introduction<br />

Financial market volatility plays a very important role in the theory and practice of asset pricing, risk management,<br />

portfolio selection and hedging. Because of its importance, both market participants and financial academics have long<br />

been interested in estimating and predicting future volatility.<br />

Volatility models that fall into one of two categories, the ARCH family and the stochastic volatility family, have<br />

been commonly used in modeling volatility for estimation and forecasting. These models are based on historical data.<br />

Recently there has been a growing interest in extracting volatility from prices of options. This is because if markets are<br />

efficient and the option pricing model is correct, then the implied volatility calculated from option prices should be an<br />

unbiased and efficient estimator of future realized volatility, that is, it should correctly subsume information contained<br />

in all other variables including the asset's price history.<br />

The hypothesis that implied volatility (IV) is a rational forecast of subsequently realized volatility (RV) has been<br />

frequently tested in the literature 1 . Empirical research across countries and markets so far has failed to provide a<br />

definitive answer. Early research on the predictive content of IV found that IV explains variation in future volatilities<br />

better than historical volatility (HV) (see, for example, Latane and Rendleman (1976), Chiras and Manaster (1978),<br />

Schmalensee and Trippi (1978) and Beckers (1981)).<br />

In subsequent research, Kumar and Shastri (1990), Randolph et al. (1991), Day and Lewis (1992), Lamoureux and Lastrapes (1993), and Canina and Figlewski (1993) found that IV is a poor forecast of the subsequent RV over the<br />

remaining life of the option. Specifically, Day and Lewis (1992) and Lamoureux and Lastrapes (1993) find that IV has<br />

some predictive power, but that GARCH and/or HV improve this predictive power and Canina and Figlewski (1993)<br />

show the absence of correlation between IV and future RV over the remaining life of the option.<br />

But the findings in the papers above are subject to a few problems in their research designs, such as maturity<br />

mismatch and/or overlapping samples, among others. Overcoming these problems, more recent papers (e.g., Jorion<br />

(1995), Fleming (1998), Moraux et al. (1999), Bates (2000), Blair et al. (2001), Simon (2003), Corrado and Miller<br />

(2005)) confirm that IV still outperforms other volatility measures in forecasting future volatility, although there is<br />

some evidence that it is a biased forecast. Christensen and Prabhala (1998), using monthly non-overlapping data, find<br />

that IV in at-the-money one-month OEX call options is an unbiased and efficient forecast of ex-post RV after the 1987<br />

stock market crash.<br />

Szakmary et al. (2003) find that for a large majority of the 35 futures options markets IV, though not a completely<br />

unbiased predictor of future volatility, outperforms the HV as a predictor of future volatility, and that HV is subsumed<br />

by IV for most of the 35 markets examined. IV from options written on crude oil futures was examined by Day and<br />

Lewis (1993). They compare it to a simple HV and out-of-sample GARCH volatility forecasts and find extremely good<br />

forecasting performance for IV in this market. Although the results are somewhat mixed, the overall opinion seems to<br />

be that IV has predictive power and therefore is a useful measure of expected future volatility. However, given the<br />

equivocal results and conclusions across different options markets, it is clear that further research on the predictive<br />

power of IV is needed.<br />

Most of these studies have focused on individual stocks and stock indices, bonds, and currencies. In this paper we analyze the implied volatility in the light, sweet crude oil market, which deserves attention for a number of reasons.<br />

1 Poon and Granger (2003) provide an extensive review of the literature on volatility forecasting.<br />



Crude oil is the biggest and most widely traded commodity market in the world and the light, sweet crude oil futures<br />

contract is the world's most liquid and largest-volume futures contract trading on a physical commodity. Since high volatility is a defining characteristic of oil prices, this market is a very promising area for testing volatility models.<br />

In particular, we test whether the IV is a better predictor of future RV and whether it reveals incremental<br />

information beyond that contained in historical returns. We calculate IV of light, sweet crude oil futures from options<br />

prices based on the concept of the fair value of future variance that appeared first in Dupire (1994) and Neuberger<br />

(1994) and was improved further by various researchers (e.g., Carr and Madan (1998), Demeterfi et al. (1999), Britten-<br />

Jones and Neuberger (2000), Carr and Wu (2006) and Jiang and Tian (2005)). This measure is calculated directly from<br />

market observables, such as the market prices of options and interest rates, independent of any pricing model. Thus the<br />

measurement error resulting from model misspecification is reduced.<br />

As a volatility proxy, we use the range-based -- or extreme value -- estimators proposed separately by Garman and<br />

Klass (1980), Parkinson (1980), Rogers and Satchell (1991), and Yang and Zhang (2000). According to Alizadeh,<br />

Brandt and Diebold (2002), the log range is nearly Gaussian, more robust to microstructure noise and much less noisy<br />

than alternative volatility measures such as log absolute or squared returns. In addition, these estimators require the<br />

daily open, close, high and low price data that are readily available for most financial markets. To our knowledge, this<br />

is the first study to apply model-free methodology for implied volatility and the range-based estimators in the light,<br />

sweet crude oil futures market.<br />

Our findings can be summarized as follows. We find strong indications that the implied volatility obtained from<br />

option prices, though slightly biased, indeed contains important information for predicting realized volatility at a<br />

monthly frequency. It is also significant in the multiple regression where historical volatility is included, which means<br />

that implied volatility subsumes the information content of historical volatility. The performance of option price based<br />

predictions of future volatility is substantially improved by applying the instrumental variable approach to correct for<br />

error in the predicted volatility variable. Finally, we find that implied volatility has better predictive power during the more volatile subperiod following the September 11 terrorist attacks.<br />

The paper proceeds as follows. Section 2 describes the data. Section 3 presents the methodology used for<br />

construction of implied volatility, extreme value estimators and analysis of the information content of implied volatility.<br />

Section 4 describes the statistical properties of implied and realized volatilities and presents the results for the forecasting performance<br />

of the volatility index in terms of future realized volatility. In Section 5 we conduct robustness analysis. The<br />

conclusions drawn from this study are presented in Section 6.<br />

2. Data description<br />

The dataset for this study contains daily time series of light, sweet crude oil futures and American-style options written<br />

on these futures which are both traded on the New York Mercantile Exchange (NYMEX) for the period from January<br />

02, 1996 through December 14, 2006. 2 On the NYMEX, futures and futures options are traded on the same floor, which<br />

facilitates hedging, arbitrage, and speculation. Moreover, both markets close at the same time and their prices are<br />

observed simultaneously which reduces the non-synchronicity biases and other measurement errors.<br />

We consider only options at the two nearest maturities. When the time to the nearest maturity is less than seven<br />

calendar days, the next two nearest maturities are used. We match all puts and calls by trading date, maturity, and strike.<br />

For each pair, we drop strikes for which put/call price is less than $0.01. 3 Generally, a large number of options meet<br />

these selection criteria. Since the options we use are American type, their prices could be slightly higher than prices of<br />

the corresponding European options.<br />

The futures high, low, open and closing prices are taken from the corresponding nearest futures contracts. We use<br />

only the futures contracts with the same contract months as options to ensure the best match between the implied<br />

volatility and the realized volatility calculated from subsequent futures prices. To eliminate any effect at the time of<br />

rollover, we compute daily futures returns using only price data from the identical contract. Therefore, on the day of<br />

rollover, we gather futures prices for both the nearby and first-deferred contracts, so that the daily return on the day<br />

after rollover is measured with the same contract month. For the proxy of the risk-free interest rate, we use the rates of<br />

the Treasury bill that expires closest to the option expiration date.<br />
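A minimal sketch of this rollover rule, with hypothetical prices and dates:<br />

```python
# Sketch of the rollover rule described above: daily returns are always
# computed within a single contract month, so on the day after rollover the
# return uses the first-deferred contract's price from the rollover day.
# Prices and dates are hypothetical.
import math

# (date, nearby close, first-deferred close); rollover happens after day 2
prices = [
    ("d1", 60.0, 60.5),
    ("d2", 61.0, 61.4),   # rollover day: record both contracts
    ("d3", None, 62.0),   # old nearby contract no longer tracked
]

returns = []
# d1 -> d2: both prices from the old nearby contract
returns.append(math.log(prices[1][1] / prices[0][1]))
# d2 -> d3: both prices from the new (previously first-deferred) contract
returns.append(math.log(prices[2][2] / prices[1][2]))

print([round(r, 4) for r in returns])
```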

2 NYMEX does list European-style options. However, the trading history is much shorter and liquidity is much lower than for the American-style options.<br />
<br />
3 The reason for requiring option prices to exceed the given threshold is that crude oil options are quoted with a precision of 0.01 USD.<br />



Figure 1 plots the closing prices of the futures data. Figure 2 plots the futures returns series actually used in this study.<br />

Table 1 reports summary statistics for daily returns. Crude oil futures returns conform to several stylized facts<br />

which have been extensively documented, such as fat tails and excess kurtosis.<br />
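These moments, and the Jarque-Bera statistic reported in Table 1, can be computed as follows (the returns below are hypothetical, not the crude oil sample):<br />

```python
# Sketch of the summary statistics in Table 1: sample moments and the
# Jarque-Bera normality statistic JB = n/6 * (S^2 + (K - 3)^2 / 4),
# computed on hypothetical daily returns (in percent).
import math

def moments(returns):
    n = len(returns)
    mean = sum(returns) / n
    m2 = sum((r - mean) ** 2 for r in returns) / n
    m3 = sum((r - mean) ** 3 for r in returns) / n
    m4 = sum((r - mean) ** 4 for r in returns) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2          # raw kurtosis; 3 for a normal distribution
    jb = n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
    return mean, math.sqrt(m2), skew, kurt, jb

returns = [0.5, -1.2, 2.3, 0.1, -0.7, 1.9, -2.5, 0.4, 0.0, -0.8]
mean, std, skew, kurt, jb = moments(returns)
print(round(skew, 3), round(kurt, 3), round(jb, 3))
```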

3. Methodology<br />

3.1. Range-based volatility estimators<br />

Table 1. Descriptive statistics<br />
<br />
Statistic      Returns<br />
Mean           0.043<br />
Std. Dev.      2.216<br />
Skewness       -0.296<br />
Kurtosis       5.250<br />
Jarque-Bera    612.04<br />
P-value        0.000<br />

Figure 1. Closing prices of light, sweet crude oil prices<br />

Figure 2. Returns of light, sweet crude oil futures<br />

The idea of using information on daily high and low prices, as well as the opening and closing prices, goes back to<br />

Parkinson (1980) and Garman and Klass (1980) with further contributions by Beckers (1983), Ball and Torous (1984),<br />

Wiggins (1991), Rogers and Satchell (1991), Kunitomo (1992), Yang and Zhang (2000).<br />

Define the following variables: the opening price of the trading day t, the closing price of the trading<br />

day t, the highest price of the trading day t, the lowest price of the trading day t. Traditionally, the<br />

unconditional realized volatility of asset returns has been estimated using the series of closing prices as the daily<br />

squared return:<br />



σ²_{c,t} = [ln(C_t / C_{t-1})]²     (1)<br />

However, using closing prices alone to estimate volatility ignores information contained in intraday high and low<br />

prices. Assuming an underlying geometric Brownian motion with no drift, Parkinson (1980) developed an estimator<br />

which uses the daily high and low prices of the asset for estimating its volatility:<br />

σ²_{p,t} = (1 / (4 ln 2)) [ln(H_t / L_t)]²     (2)<br />

Since the price path is not observable when the market is closed, Garman and Klass (1980) suggest a method to<br />

mitigate the effect of discontinuous observations by including opening and closing prices along with the highest and<br />

lowest prices as follows:<br />

σ²_{gk,t} = 0.5 [ln(H_t / L_t)]² − (2 ln 2 − 1) [ln(C_t / O_t)]²     (3)<br />

Rogers and Satchell (1991) relaxed the assumption of no drift and proposed an estimator which is given by:<br />

σ²_{rs,t} = ln(H_t / C_t) ln(H_t / O_t) + ln(L_t / C_t) ln(L_t / O_t)     (4)<br />

Finally, Yang and Zhang (2000) proposed a further improvement: an estimator that is independent of any drift and consistent in the presence of opening price jumps. This estimator can be interpreted as a weighted average of the Rogers and Satchell (1991) estimator, the overnight (close-to-open) volatility and the open-to-close volatility, with the weights chosen to minimize the variance of the estimator:<br />
<br />
σ²_{yz} = σ²_o + k σ²_c + (1 − k) σ²_{rs}     (5)<br />
<br />
with:<br />
<br />
k = 0.34 / (1.34 + (n + 1)/(n − 1))<br />
<br />
where σ²_o is the variance of the overnight log returns ln(O_t / C_{t−1}), σ²_c is the variance of the open-to-close log returns ln(C_t / O_t), n is the number of trading days in the estimation window, and σ²_{rs} is the Rogers-Satchell (1991) estimator.<br />
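As a sketch, the daily range-based estimators of equations (1)-(4) can be computed directly from open, high, low and close prices; the prices below are hypothetical:<br />

```python
# Sketch of the four daily variance estimators, eqs. (1)-(4), from OHLC data.
# The Yang-Zhang estimator (5) additionally combines overnight, open-to-close
# and Rogers-Satchell variances over a window of days, so it is omitted here.
import math

def classical(prev_close, close):
    """Eq. (1): squared close-to-close log return."""
    return math.log(close / prev_close) ** 2

def parkinson(high, low):
    """Eq. (2): high-low range estimator (driftless GBM assumption)."""
    return math.log(high / low) ** 2 / (4.0 * math.log(2.0))

def garman_klass(open_, high, low, close):
    """Eq. (3): adds open/close information to the range."""
    return (0.5 * math.log(high / low) ** 2
            - (2.0 * math.log(2.0) - 1.0) * math.log(close / open_) ** 2)

def rogers_satchell(open_, high, low, close):
    """Eq. (4): drift-independent range estimator."""
    return (math.log(high / close) * math.log(high / open_)
            + math.log(low / close) * math.log(low / open_))

# One hypothetical trading day of crude oil futures prices
o, h, l, c, prev_c = 60.0, 61.5, 59.4, 60.9, 60.2
for est in (classical(prev_c, c), parkinson(h, l),
            garman_klass(o, h, l, c), rogers_satchell(o, h, l, c)):
    print(round(est, 6))
```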

3.2. Model-free implied volatility<br />

Britten-Jones and Neuberger (2000) proposed an implied volatility measure derived entirely from no-arbitrage condition<br />

rather than from any specific model and calculated directly from market observables, independent of any pricing model.<br />

In addition, it incorporates information from the volatility skew by using a wider range of strike prices rather than just in-the-money options.<br />

This methodology, based on the theory of variance swap contracts and with pricing obtained from the full cross-section of option prices, has been adopted by the Chicago Board Options Exchange (CBOE) in constructing the monthly implied volatility index, known as VIX, from S&P 500 index option prices. Following the VIX methodology with<br />

some modifications, we calculate implied volatility using the following equation:<br />

σ² = (2 / T) Σ_i (ΔK_i / K_i²) e^{rT} Q(K_i, T) − (1 / T) (F / K_0 − 1)²     (6)<br />
<br />
where: ΔK_i is the difference between strike prices, defined as ΔK_i = (K_{i+1} − K_{i−1}) / 2; K_i is the strike price of the i-th out-of-the-money option (a call if K_i > F and a put otherwise); Q(K_i, T) is the settlement price of the option with strike price K_i; F is the forward index level derived from the nearest-to-the-money option prices by using put-call parity, such that F = K* + e^{rT} [C(K*) − P(K*)], where K* is the strike at which the absolute difference between call and put prices is smallest; K_0 is the first strike below the forward index level F; T is the expiration date for all the options involved in the calculation; and r is the risk-free rate to expiration.<br />

σ² is calculated at the two nearest maturities of the available options, T_1 and T_2. Then we interpolate between σ²(T_1) and σ²(T_2) to obtain an estimate at the 30-day maturity:<br />
<br />
σ²_{30} = [ T_1 σ²(T_1) (N_{T_2} − 30)/(N_{T_2} − N_{T_1}) + T_2 σ²(T_2) (30 − N_{T_1})/(N_{T_2} − N_{T_1}) ] × 365 / 30     (7)<br />
<br />
where N_{T_1} and N_{T_2} denote the number of actual days to expiration for the two maturities.<br />
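A minimal sketch of the model-free calculation, assuming a small hypothetical option chain; for brevity the same chain is reused at both maturities, whereas the real calculation uses the full chain quoted at each maturity:<br />

```python
# Sketch of the model-free variance of eq. (6) and the 30-day interpolation
# of eq. (7), following the VIX-style methodology described above.
# The option chain, forward level and rates are hypothetical.
import math

def model_free_var(strikes_prices, forward, rate, tau):
    """strikes_prices: list of (strike, OTM option settlement price), sorted."""
    strikes = [k for k, _ in strikes_prices]
    total = 0.0
    for i, (k, q) in enumerate(strikes_prices):
        if i == 0:                       # boundary strikes use one-sided gaps
            dk = strikes[1] - strikes[0]
        elif i == len(strikes) - 1:
            dk = strikes[-1] - strikes[-2]
        else:
            dk = (strikes[i + 1] - strikes[i - 1]) / 2.0
        total += dk / k ** 2 * math.exp(rate * tau) * q
    k0 = max(k for k in strikes if k <= forward)  # first strike below F
    return (2.0 / tau) * total - (1.0 / tau) * (forward / k0 - 1.0) ** 2

chain = [(55.0, 0.40), (57.5, 0.90), (60.0, 1.70), (62.5, 0.85), (65.0, 0.35)]
t1, t2 = 20 / 365.0, 48 / 365.0
var1 = model_free_var(chain, forward=60.4, rate=0.05, tau=t1)
var2 = model_free_var(chain, forward=60.4, rate=0.05, tau=t2)

# Eq. (7): linear interpolation of total variance to a 30-day horizon
w = (48 - 30) / (48 - 20)
iv30 = math.sqrt((t1 * var1 * w + t2 * var2 * (1 - w)) * 365.0 / 30.0)
print(round(iv30, 4))
```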

3.3. The information content of implied volatility<br />

IV has been regarded as an unbiased expectation of the RV under the assumption that the market is informationally<br />

efficient and the option pricing model is specified correctly. Consistent with the previous literature, to test whether the<br />



implied volatility index has a significant amount of information over the historical volatility, we examine the following<br />

three hypotheses:<br />

H1. Implied volatility is an unbiased estimator of the future realized volatility.<br />

H2. Implied volatility has more explanatory power than the historical volatility in forecasting realized volatility.<br />

H3. Implied volatility efficiently incorporates all information regarding future volatility; historical volatility contains no<br />

information beyond what is already included in implied volatility.<br />

To test the above hypotheses, we use the following regression models commonly used in the literature:<br />

RV_t = α + β IV_t + ε_t     (8)<br />
<br />
RV_t = α + β HV_t + ε_t     (9)<br />
<br />
RV_t = α + β_1 IV_t + β_2 HV_t + ε_t     (10)<br />

If, as our hypothesis H1 stated earlier, IV is an unbiased predictor of the RV, we should expect α = 0 and β = 1 in regression (8). Moreover, if implied volatility is efficient, the residuals from regression (8) should be white noise and uncorrelated with any variable in the market's information set. If, in accordance with hypothesis H2, IV includes more information (i.e., current market information) than HV, then IV should have greater explanatory power than HV, and we would expect a higher R² from regression (8) than from regression (9). Finally, if hypothesis H3 is correct, then when IV and HV appear in the same regression, as in (10), we would expect β_2 = 0, since HV should have no explanatory power beyond that already contained in IV.<br />
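The three regressions can be sketched on simulated data; with RV generated as IV plus noise, the estimates should behave as hypotheses H1-H3 predict (all data here is simulated, not the NYMEX sample):<br />

```python
# Sketch of regressions (8)-(10) via ordinary least squares on simulated
# monthly observations. RV is generated as IV plus forecast error, so (8)
# should recover alpha ~ 0, beta ~ 1, and in (10) the HV coefficient ~ 0.
import numpy as np

rng = np.random.default_rng(0)
n = 120                                   # e.g. monthly non-overlapping obs
iv = 0.3 + 0.05 * rng.standard_normal(n)
hv = iv + 0.03 * rng.standard_normal(n)   # noisy echo of the same information
rv = iv + 0.02 * rng.standard_normal(n)   # RV = IV + forecast error

def ols(y, *regressors):
    """Return [alpha, beta_1, ...] from an OLS fit with intercept."""
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

a8, b8 = ols(rv, iv)             # eq. (8)
a9, b9 = ols(rv, hv)             # eq. (9)
a10, b1, b2 = ols(rv, iv, hv)    # eq. (10)

print(round(b8, 2), round(b2, 2))  # beta near 1; HV coefficient near 0
```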

For the analysis on the information content of implied volatility, we use non-overlapping observations by<br />

computing realized volatility separately for each calendar month, following the example of Christensen and Prabhala<br />

(1998), since non-overlapping data results in more robust econometric findings.<br />

4. Empirical results<br />

Figure 3 shows the daily level of the implied volatility for the entire sample. In Figure 4 we present the various<br />

estimates of daily realized volatility. The peaks of these estimates are approximately synchronous, but the general behavior of the series differs, both in the range of variances and in persistence. Estimators using range data<br />

are less volatile than the classical estimator. The Augmented Dickey-Fuller test strongly rejects the presence of a unit<br />

root in all the series.<br />

Descriptive statistics for the levels and logarithms of both realized and implied volatilities are provided in Table 2<br />

for the entire sample in Panel A, for the first subperiod January 1996 through September 2001 in Panel B, and for the second subperiod October 2001 through December 2006 in Panel C. Both average implied volatility and average log<br />

implied volatility exceed the means of the corresponding realized volatility.<br />

Figure 3. Model-free implied volatility at daily frequency<br />



Table 2. Descriptive statistics<br />

Panel A: Full period - 01/1996 to 12/2006<br />
            RV_C     RV_P     RV_GK    RV_RS    RV_YZ    IV<br />
Mean        0.335    0.315    0.328    0.330    0.307    0.372<br />
St. Dev.    0.090    0.068    0.074    0.078    0.077    0.072<br />
Kurtosis    6.040    5.988    7.066    8.149    5.574    4.367<br />
Skewness    1.489    1.335    1.575    1.766    1.898    0.983<br />
            logRV_C  logRV_P  logRV_GK logRV_RS logRV_YZ logIV<br />
Mean        -0.488   -0.511   -0.494   -0.492   -0.524   -0.437<br />
St. Dev.    0.106    0.088    0.089    0.093    0.097    0.080<br />
Kurtosis    3.673    3.779    4.170    4.492    4.666    3.126<br />
Skewness    0.627    0.536    0.683    0.746    0.838    0.412<br />

Panel B: Subperiod 01/1996 to 09/2001<br />
            RV_C     RV_P     RV_GK    RV_RS    RV_YZ    IV<br />
Mean        0.344    0.318    0.328    0.331    0.309    0.354<br />
St. Dev.    0.092    0.066    0.071    0.076    0.075    0.059<br />
Kurtosis    5.509    4.165    4.385    5.065    5.374    3.599<br />
Skewness    1.252    0.852    0.954    1.071    1.221    0.589<br />
            logRV_C  logRV_P  logRV_GK logRV_RS logRV_YZ logIV<br />
Mean        -0.477   -0.507   -0.493   -0.491   -0.522   -0.457<br />
St. Dev.    0.109    0.087    0.091    0.095    0.100    0.072<br />
Kurtosis    3.466    3.298    3.240    3.383    3.513    2.851<br />
Skewness    0.368    0.218    0.291    0.305    0.408    0.115<br />

Panel C: Subperiod 10/2001 to 12/2006<br />
            RV_C     RV_P     RV_GK    RV_RS    RV_YZ    IV<br />
Mean        0.326    0.312    0.326    0.329    0.306    0.392<br />
St. Dev.    0.088    0.071    0.077    0.081    0.079    0.079<br />
Kurtosis    7.463    7.906    9.739    11.306   9.207    3.807<br />
Skewness    1.849    1.802    2.168    2.440    2.604    0.951<br />
            logRV_C  logRV_P  logRV_GK logRV_RS logRV_YZ logIV<br />
Mean        -0.500   -0.515   -0.496   -0.493   -0.526   -0.415<br />
St. Dev.    0.103    0.089    0.089    0.091    0.094    0.084<br />
Kurtosis    4.414    4.682    5.568    6.259    6.645    2.819<br />
Skewness    0.963    0.888    1.162    1.328    1.424    0.477<br />

Figure 4. Daily level of realized volatilities<br />

Table 3 reports the ordinary least-squares estimates for regressions (8) - (10). The first regression shows that<br />

implied volatility does contain information about realized volatility. However, we cannot conclude that the logarithm of<br />

implied volatility is an unbiased estimator of realized volatility. The slope coefficient is 0.6722 and is significantly different<br />

from zero, but also significantly less than unity, although the intercept is not statistically different from zero at the 5%<br />

significance level. An F-test rejects the joint hypothesis of a zero intercept and unit slope at the 1% significance level.<br />

This conclusion is found to be robust across a variety of asset markets (see Neeley (2004)) and has thus motivated<br />

several attempted explanations of this common finding. As Christensen and Prabhala (1998)<br />

suggest, the results may be affected by an errors-in-variables (EIV) problem, which induces a bias in both slope coefficients.<br />



Consistent estimation in the presence of a possible errors-in-variables problem may be achieved using an instrumental<br />

variable method, which we present in the next section.<br />

Despite this bias, implied volatility remains a better predictor than past realized volatility. Taken alone,<br />

historical volatility is statistically significant, with a coefficient of 0.5001, but its predictive power is clearly inferior to that of<br />

implied volatility. Putting implied volatility and past volatility in the same regression yields telling results:<br />

the slope coefficient for implied volatility remains statistically significant in the multiple regression, while the<br />

coefficient on historical volatility decreases strongly and becomes insignificant, indicating that historical volatility contains no<br />

information beyond that in implied volatility. Implied volatility is sufficiently precise that it subsumes the information content of<br />

historical volatility.<br />

Table 3. Information content of implied volatility: OLS estimates<br />

                Intercept   IV         HV         Adj. R2   Wald test   Durbin-Watson<br />
(8)  IV only    0.1151      0.6722     -          0.2561    0.005       2.0385<br />
                (0.044)     (0.1145)<br />
(9)  HV only    0.2076      -          0.5001     0.1246    0.000       2.147<br />
                (0.0352)               (0.0945)<br />
(10) IV and HV  0.1132      0.6547     0.0266     0.2504    0.004       2.068<br />
                (0.0397)    (0.2167)   (0.1981)<br />

5. Robustness analysis<br />

We perform several exercises to verify the robustness of our results. We evaluate the impact of the errors-in-variables<br />

problem on the regressions in which implied volatility is used as a regressor, and we analyze whether the information<br />

efficiency of implied volatility varies significantly across subsample periods.<br />

5.1. Instrumental variable<br />

The fact that the slope estimate is significantly below the null hypothesis of one could be due either to implied volatility<br />

being a biased forecast or to the bias induced by the errors-in-variables problem. Christensen and Prabhala (1998)<br />

assume that the errors-in-variables (EIV) problem causes implied volatility to appear both biased and inefficient 4 and<br />

propose an instrumental-variable framework as a way of correcting EIV problems in OEX implied volatility.<br />

Within this framework, to correct for EIV, the following equations are used:<br />

(11)<br />

(12)<br />

Under this procedure, implied volatility is first regressed on an instrument, which is correlated with true<br />

implied volatility at time t but is not correlated with the measurement error associated with implied volatility sampled<br />

one month later. Using this instrument, we estimate regression (11) by OLS. Then we reestimate<br />

specifications (8) - (10) by replacing implied volatility with the fitted values from regression (11). We use the<br />

same procedure for specification (10): we first regress implied volatility on both instruments and use the fitted values<br />

from this regression in specification (10).<br />
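The two-step procedure can be sketched as follows. The instrument here is lagged implied volatility, following Christensen and Prabhala (1998); all series are simulated purely for illustration, so the coefficient values are assumptions and not the paper's estimates.<br />

```python
import numpy as np

# Sketch of the EIV correction: two-step IV estimation with lagged implied
# volatility as the instrument. All data below are simulated.
rng = np.random.default_rng(2)
n, phi = 132, 0.8
true_iv = np.empty(n)
true_iv[0] = 0.35
shocks = 0.03 * rng.standard_normal(n)
for t in range(1, n):                      # persistent latent volatility (AR(1))
    true_iv[t] = 0.07 + phi * true_iv[t - 1] + shocks[t]

iv_obs = true_iv + 0.02 * rng.standard_normal(n)            # IV measured with error
rv = 0.02 + 0.90 * true_iv + 0.03 * rng.standard_normal(n)  # realized volatility

def ols(y, *regressors):
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: the measurement error biases the slope toward zero.
_, b_naive = ols(rv[1:], iv_obs[1:])

# Step 1 (eq. 11): regress observed IV on its lag, the instrument.
g0, g1 = ols(iv_obs[1:], iv_obs[:-1])
iv_fitted = g0 + g1 * iv_obs[:-1]

# Step 2: rerun specification (8) with the fitted values replacing IV.
_, b_iv = ols(rv[1:], iv_fitted)
print(round(b_naive, 2), round(b_iv, 2))  # b_iv should sit nearer the true 0.9
```

The second-stage slope recovers (up to sampling noise) the true coefficient, whereas the naive slope is attenuated, which is exactly the pattern the EIV argument predicts.<br />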

Table 4 reports estimates based on this IV procedure. Panel A reports estimates of the first-step regressions (11) and<br />

(12), while Panel B reports the estimates of (8) and (10). The estimates in Panel B provide evidence that implied<br />

volatility is much less biased and is efficient. The point estimates of β1 in specifications (8) and (10) are 0.892 and<br />

0.824, respectively. Also, the IV estimate of β2 is not significantly different from zero, indicating that implied volatility<br />

is efficient.<br />

4 The EIV problem has two effects. It generates a downward bias for the slope coefficient of implied volatility and an upward bias for the slope<br />

coefficient of past volatility, which explains the underestimation of implied volatility and the overestimation of past volatility. As a result, the usual<br />

OLS estimates will lead to false conclusions concerning the predictive power of implied volatility.<br />



Table 4. Information content of implied volatility: instrumental variables estimates<br />

Panel A: first-stage regression estimates<br />
Dependent variable:<br />
          Intercept   IV         HV         Adj. R2   Durbin-Watson<br />
(11)      0.128       0.656      -          0.428     2.04<br />
          (0.000)     (0.000)<br />
(12)      0.1333      0.747      -0.127     0.434     2.17<br />
          (0.000)     (0.000)    (0.123)<br />

Panel B: second-stage IV estimates<br />
Dependent variable:<br />
          Intercept   IV         HV         Adj. R2   Durbin-Watson<br />
(8)       0.02        0.88       -          0.29      2.00<br />
          (0.666)     (0.000)<br />
(10)      -0.02       0.83       0.06       0.29<br />
          (0.683)     (0.000)    (0.481)<br />

5.2. Subperiod analysis<br />

To control for the time-period effect, we reestimate specifications (8) and (10) separately for two subperiods: a<br />

pre-September 11 subperiod (January 1996 to August 2001) and a post-September 11 subperiod (October 2001 to<br />

December 2006). Panel A of Table 5 reports estimates for the first subperiod and Panel B for the second.<br />

Table 5. Information content of implied volatility: subperiod analysis<br />

Dependent variable:<br />

Panel A: 01/1996 – 08/2001<br />
                Intercept   IV         HV         Adj. R2   Durbin-Watson<br />
(8)  IV only    0.117       0.701      -          0.22      2.04<br />
                (0.046)     (0.000)<br />
(9)  HV only    0.244       -          0.333      0.09      2.15<br />
                (0.000)                (0.008)<br />
(10) IV and HV  0.108       0.631      0.092      0.21      2.17<br />
                (0.072)     (0.002)    (0.500)<br />

Panel B: 10/2001 – 12/2006<br />
                Intercept   IV         HV         Adj. R2   Durbin-Watson<br />
(8)  IV only    0.037       0.822      -          0.48      2.14<br />
                (0.395)     (0.000)<br />
(9)  HV only    0.217       -          0.390      0.18      2.04<br />
                (0.000)                (0.000)<br />
(10) IV and HV  0.038       0.907      -0.092     0.48      2.17<br />
                (0.389)     (0.000)    (0.429)<br />

The estimate of the slope coefficient for implied volatility in the first subperiod is smaller than the corresponding estimate<br />

in the second subperiod. These results suggest a regime shift, with implied volatility becoming less biased<br />

after the attacks.<br />

6. Conclusion<br />

In this paper, we construct a model-free implied volatility from the options on light, sweet crude oil futures, along with<br />

different measures of realized volatility for these futures. The main question we address is whether the volatility implied<br />

by option prices predicts future realized volatility. We find that implied volatility does predict future realized<br />

volatility, both on its own and alongside past volatility. We also find that historical volatility does not add any information<br />

beyond that in implied volatility. Hence, we cannot reject the hypothesis that the volatility implied by option prices is an<br />

efficient, though slightly biased, estimator of realized volatility.<br />

Implied volatility appears less biased and more efficient once we account for errors in variables and apply an<br />

instrumental-variable approach. This result supports the use of option pricing theory even for light, sweet<br />

crude oil options. We also find that, in the light, sweet crude oil market, implied volatility performs better during the<br />

more volatile period following the September 11 terrorist attacks.<br />



The main issue that deserves attention in future research is whether, in the crude oil market, the<br />

bias is caused by the OLS estimation method giving biased parameter estimates or by inefficiency of the options market,<br />

i.e. by implied volatility being an inefficient forecast of future volatility.<br />

7. References<br />

Alizadeh, S., M.W. Brandt and F.X. Diebold (2002). Range-based estimation of stochastic volatility models, Journal of<br />

Finance, 57(3), 1047-1092.<br />

Ball, C.A., and W.N. Torous (1984). The maximum likelihood estimation of security price volatility: Theory, evidence,<br />

and application to option pricing, Journal of Business, 57, 97–112.<br />

Bates, D.S. (2000). Post-87 crash fears in S&P 500 futures options, Journal of Econometrics, 94, 181-238.<br />

Beckers, S. (1981). Standard Deviations Implied in Option Prices as Predictors of Future Stock Price Volatility, Journal<br />

of Banking and Finance, 5, 363-381.<br />

Black, F., and M. Scholes (1973). The Pricing of Options and Corporate Liabilities, Journal of Political Economy, 81,<br />

637-659.<br />

Blair, B., S.-H. Poon and S.J. Taylor (2001). Forecasting S&P 100 volatility: The incremental information content of<br />

implied volatilities and high frequency index returns, Journal of Econometrics, 105, 5-26.<br />

Britten-Jones, M., and A. Neuberger (2000). Option prices, implied price processes, and stochastic volatility, Journal of<br />

Finance, 55, 839–866.<br />

Canina, L., and S. Figlewski (1993). The informational content of implied volatility, Review of Financial Studies, 6, 3,<br />

659-681.<br />

Carr, P., and D. Madan (1998). Towards a theory of volatility trading. In Jarrow, Robert A. ed.: Volatility: New<br />

estimation techniques for pricing derivatives, Risk Publications, London.<br />

Carr, P., and Wu, L. (2006). A tale of two indices, Journal of Derivatives, 13, 13–29.<br />

Chiras, D., and S. Manaster (1978). The information content of option prices and a test of market efficiency, Journal of<br />

Financial Economics, 6, 213-234.<br />

Christensen, B.J., and N.R. Prabhala (1998). The relation between implied and realized volatility, Journal of Financial<br />

Economics, 50, 2, 125-150.<br />

Corrado, C.J., Miller, T.W. (2005). The Forecast Quality of CBOE Implied Volatility Indexes, Journal of Futures<br />

Markets, 25, 339-373.<br />

Day, T.E., and C.M. Lewis (1993). Forecasting futures market volatility, Journal of Derivatives, 1, 33-50.<br />

Demeterfi, K., Derman, E., Kamal, M., Zou, J. (1999). More than you ever wanted to know about volatility swaps,<br />

Journal of Derivatives, 6, 9–32.<br />

Dupire, B. (1994). Pricing with a smile, Risk 7(1), 18-20.<br />

Fleming, J. (1998). The quality of market volatility forecasts implied by S&P 100 index option prices, Journal of<br />

Empirical Finance, 5, 317-345.<br />

Garman, M.B., and M.J. Klass (1980). On the estimation of security price volatilities from historical data, Journal of<br />

Business, 53, 1, 67-78.<br />

Jiang, G. J., and Y. S. Tian (2005). Model-free implied volatility and its information content, Review of Financial<br />

Studies, 18(4), 1305-1342.<br />

Jorion, P. (1995). Predicting volatility in the foreign exchange market, Journal of Finance, 50, 2, 507-528.<br />

Kumar, R., and Shastri, K. (1990). The Predictive Ability of Stock Prices Implied in Option Premia, in Advances in<br />

Futures and Options Research, Greenwich, CT: JAI Press, Vol. 4.<br />

Kunitomo, N. (1992). Improving the Parkinson method of estimating security price volatilities, Journal of Business, 65,<br />

295-302.<br />

Lamoureux, C., and W. Lastrapes (1993). Forecasting stock-return variance: toward an understanding of stochastic<br />

implied volatilities, Review of Financial Studies, 6, 2, 293-326.<br />

Latane, H., and R.J. Rendleman (1976). Standard deviations of stock price ratios implied in option prices, Journal of<br />

Finance, 31, 2, 369-381.<br />

Moraux, F., Navatte, P., and Villa, C. (1999). The Predictive Power of the French Market Volatility Index: A Multi<br />

Horizons Study, European Finance Review, 2, 303-320.<br />

Neuberger, A. (1994). The log contract: a new instrument to hedge volatility, Journal of Portfolio Management, Winter,<br />

74-80.<br />



Parkinson, M. (1980). The extreme value method for estimating the variance of the rate of return, Journal of Business,<br />

53, 61-65.<br />

Poon, S.-H., and C.W.J. Granger (2003). Forecasting financial market volatility: a review, Journal of Economic<br />

Literature, 41, 2, 478-539.<br />

Randolph, W.L., and M. Najand (1991). A test of two models in forecasting stock index futures price volatility, Journal<br />

of Futures Markets, 11, 2, 179-190.<br />

Rogers, L.C.G., and S.E. Satchell (1991). Estimating variance from high, low, and closing prices, Annals of Applied<br />

Probability, 1, 504-512.<br />

Schmalensee, R., and R.R. Trippi (1978). Common stock volatility expectations implied by option premia, Journal of<br />

Finance, 33, 1, 129-147.<br />

Simon, D.P. (2003). The Nasdaq Volatility Index During and After the Bubble, Journal of Derivatives, 11, 2, 9-22.<br />

Szakmary, A., E. Ors, J.K. Kim and W.D. Davidson III (2003). The predictive power of implied volatility: Evidence<br />

from 35 futures markets, Journal of Banking and Finance, 27, 2151-2175.<br />

Wiggins, J. B. (1992). Estimating the volatility of S&P 500 futures prices using the extreme value method, Journal of<br />

Futures Markets, 12, 265–273.<br />

Yang, D., and Q. Zhang (2000). Drift-independent volatility estimation based on high, low, open, and close prices,<br />

Journal of Business, 73(3), 477-491.<br />



LITHIUM POWER: IS THE FUTURE GREEN?<br />

Satyarth Pandey & Veena Choudhary<br />

PGDM-DCP(2010-12)<br />

Institute of Management Technology, Ghaziabad, India<br />

Email: Satyarthpandey@hotmail.com, Veena207@gmail.com, www.imt.edu<br />

Abstract. The world has suffered under the pressure of oil dependence for decades. Oil has at times been a cause of<br />

war between nations, as alleged in the case of the USA and Iraq, and at other times a source of discord, as in the Russia-Ukraine<br />

gas dispute of 2005-2006.<br />

This helplessness has forced the research community to search for alternative fuel sources that can drive our day-to-day activities<br />

with as little dependency and cost as possible. Some promising alternatives have taken centre stage, "hydrogen fuel cells"<br />

being one of them, but finding ways to store volatile hydrogen safely and to bring down the costs of fuel-cell ingredients, which<br />

currently include the extremely expensive element platinum, has proved difficult.<br />

Hence the proven alternative that has risen as an economical and efficient fuel material is lithium – the force behind the<br />

electrical revolution in all sectors, including electronics and automobiles. The metal is "energy dense" yet "light" enough to be used in<br />

cell phones, laptops or electric cars.<br />

This paper analyses the rise and future prospects of lithium-based products and asks whether lithium can become an alternative<br />

that makes "black gold" redundant, ending humanity's long search for something more viable and less scarce in nature.<br />

The paper further presents an analysis of the nations that can gain a pivotal role if lithium is to rise as the fuel source of the future.<br />

Keywords: Lithium, Li, Fuel, Energy source, Lithium as a fuel, Lithium and oil<br />

JEL classification: Q42<br />

1 Introduction<br />

The rising demand for and high cost of oil have been an issue for decision makers for more than a decade. So<br />

dependent is the world on oil that nations and people have started clashing to get<br />

their hands on as much supply as possible to gain the upper hand in the world's economic environment. This helplessness<br />

and the degree of dependence on oil have together led people to search for fuel alternatives to oil.<br />

The two alternatives that have risen in recent times are hydrogen fuel cells and<br />

lithium-ion (Li-Ion) batteries. Hydrogen fuel cells, through their potent power generation, have been a lucrative<br />

prospect for researchers, but this option suffers from intrinsic pitfalls. Researchers have been fighting<br />

two major issues associated with hydrogen cells, namely reducing cost and improving durability.<br />

Lithium, on the other hand, has risen like a superstar in recent times. The advantage of lithium-based Li-Ion<br />

batteries is that they weigh less and have longer storage capacity. This is what makes them the preferred choice of<br />

today's ever-growing industry.<br />

The purpose of this paper is to analyze the prospects associated with Lithium such as its demand, availability<br />

and economical impact on the supplying nations.<br />

2 Research & Methodology<br />

The primary purpose of this paper is to highlight and analyze the possibility of using lithium as a fuel. Answers to<br />

questions such as "What would be the economic impact worldwide if lithium replaces oil as a fuel?" are sought.<br />

Because knowledge and information regarding the subject are limited and more rigorous study is needed to<br />

collect all the required information, the research can be categorized as exploratory. As the data were<br />

collected and analyzed over a period of time, this was a longitudinal study, qualitative in nature, with all<br />

information being theoretical.<br />

Collating secondary data and obtaining relevant information from it formed the main research methodology.<br />



3 Lithium – The Future Fuel<br />

3.1 What is Lithium?<br />

Lithium is considered the lightest solid element on earth. It is a highly reactive metal and quickly tarnishes<br />

in air after just a few minutes. Due to its high reactivity, it appears naturally only in the form of compounds.<br />

According to the Handbook of Lithium and Natural Calcium, "Lithium is a comparatively rare element,<br />

although it is found in many rocks and some brines, but always in very low concentrations. There are a fairly large<br />

number of both lithium mineral and brine deposits but only comparatively a few of them are of actual or potential<br />

commercial value. Many are very small; others are too low in grade."<br />

For a long time, lithium was not considered to have many practical applications in the real world.<br />

But, over the years, lithium’s commercial applications have expanded tremendously. First, the pharmaceuticals<br />

industry discovered that lithium had properties that affected brain chemistry (i.e. mood stabilizers used to treat bipolar<br />

disorder). And, later, lithium was discovered to have ideal qualities for laptop, camera, and mobile phone<br />

batteries.<br />

Although new technologies are—by definition—subject to change, lithium is currently the best raw material for<br />

making rechargeable batteries. Today, lithium-Ion (Li-Ion) batteries power most of the world’s laptops, mobile<br />

phones and cameras. But lithium enthusiasts are also pinning their hopes on the electric car.<br />

3.2 The Process and Players<br />

Lithium occurs mainly in two types of deposits: (a) spodumene – a hard silicate mineral (i.e. glass), and (b)<br />

brine salt-lake deposits – dry salt lakes containing lithium chloride (in South America these are called "salares").<br />

Today, most of the world’s lithium comes from dry salt lakes because these deposits are more economically<br />

viable for making Li-Ion batteries. These lakes result when pools of salt water containing lithium chloride (LiCl)<br />

accumulate in places lacking drainage. Over the centuries the water evaporates leaving a dense layer of salt behind.<br />

Underneath the salt crust is a layer of brine — salty groundwater with a high concentration of lithium chloride. It is<br />

this brine that is pumped out and converted to lithium.<br />

At present, the greatest part of the world's accessible lithium reserves (over 80%) is in the so-called "Lithium<br />

Triangle", where the borders of Argentina, Bolivia, and Chile meet (Evans, 2008; MIR 2008).<br />

According to the United States Geological Survey (USGS), Chile provides 61% of lithium exports to the U.S.<br />

and Argentina is the source of 36%. Chile has a reserve of an estimated 3 million tons and Argentina weighs in with<br />

400,000 tons, while Bolivia’s reserve is calculated as being about 5.4 million tons.<br />

Chile is the world’s largest producer — not only because Chile already has highly developed mining, transport<br />

and processing infrastructure, but also because its climate and geography are favorable for the optimal solar<br />

evaporation that is central to producing lithium. Neighboring Bolivia purportedly has the largest known reserves but<br />

it does not currently produce any lithium. The cost of extracting lithium from Bolivia’s Salar de Uyuni is unknown.<br />

Its ratio of magnesium to lithium – 30:1 units against 6.5:1 in Chile’s Atacama Desert – could put extraction costs at<br />

about USD $5,000 a ton, compared with USD $1,500 in Chile. Currently, lithium is not traded as a commodity so<br />

investors are looking into companies which sell the metal compounds.<br />



Figure 1: World Based Lithium Reserves<br />

Figure 2: Lithium Reserves in South American Countries<br />

From Figures 1 and 2 it is clear that South America holds the key to the lithium revolution, and<br />

the major player in this energy politics appears to be Bolivia, which has yet to open its reserves to the world.<br />

Bolivia is set to be a major player in this economic race and has already started moving<br />

with its future prospects in mind.<br />

Bolivia’s president Evo Morales has a strict agenda of natural-resource nationalization. This political agenda<br />

aims to end decades of plunder, epitomized by the faded city of Potosi, where hundreds of thousands died in<br />

the silver mines that funded Spain’s Armada. To this effect, Bolivia has ruled out private investment in producing<br />

lithium – however it has left the door open to alliances for value-added products in which the government retains a<br />

60% stake. Any partner would be required to build a plant to make lithium-battery powered cars.<br />

The stakes for Bolivia are considerable. By taking advantage of these substantial natural resources, Morales can<br />

realistically lift his country out of poverty and increase its role as a significant player in the global market. Even<br />

prior to the increased demand for lithium, Bolivia's mining industry was producing significant gains for the country,<br />

leading to a 9.4% increase in its GDP.<br />

3.3 Rise of Lithium<br />

3.3.1 Industrial Demand and Application of Lithium<br />

The wide variety of lithium applications can be seen in Figure 3. Lithium currently covers a huge range of<br />

applications and is making its major impact in the world of batteries.<br />

Figure 3: Industrial Application Of Lithium<br />

Over the past years lithium-ion batteries have become the talk of the town. The commercial viability of these<br />

batteries ranges from mobile-phone to car-powering batteries.<br />

This has given rise to the hope that lithium-powered batteries could actually replace petroleum products in the<br />

automotive industry.<br />

Figure 4 below provides a clear insight into lithium consumption by end users in various<br />

application areas and an estimate of the consumption scenario in the year 2020.<br />



Figure 4: Usage of Lithium in Different Industry Segments<br />

Figures 3 and 4 establish the premise of strong demand for lithium in various sectors. Due to<br />

growth in each sector, especially the battery segment, the demand for lithium is expected to rise steeply.<br />

Figure 5: Trend Analysis of Demand of Lithium (2002-2020)<br />

Figure 5 clearly depicts the rise in demand over the years, with demand expected to rise by up to 360% between<br />

2002 and 2020.<br />

3.3.2 Lithium – The Battery Segment<br />

The lithium-based battery segment may be termed the critical area of lithium usage. This segment is on the watch<br />

list of many observers because Li-ion batteries are the most sought after for use in electric vehicles.<br />



According to lithium producer Chemetall, there could be 6 million lithium powered vehicles by 2018. Each of<br />

these vehicles would likely have a 10kWh (kilowatt hour) Li-Ion battery. And, each of these batteries requires<br />

roughly 0.3kg of lithium metal equivalent per kWh of capacity. Therefore, this amounts to a total of 18,000 tons<br />

(6,000,000 vehicles x 0.3kg x 10) of lithium metal, or 84,000 tons of lithium carbonate (Li2CO3). This number equates to almost the<br />

entire 2008 world production of lithium and assumes that all of the lithium would be used to make electric vehicles<br />

(i.e. none in laptops or cell phones). The idea of millions of cars powered by electric batteries is the fuel for the<br />

current enthusiasm over lithium. According to the TRU Group, which is one of the most important consultants in the<br />

lithium market, by 2020 38% of the lithium batteries used will be found in vehicles.<br />
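The arithmetic behind the 18,000-ton figure quoted above is straightforward to verify:<br />

```python
# Lithium requirement for the projected EV fleet, using the figures quoted
# above: 6 million vehicles, 10 kWh per battery, 0.3 kg lithium per kWh.
vehicles = 6_000_000
battery_kwh = 10
kg_lithium_per_kwh = 0.3

total_kg = vehicles * battery_kwh * kg_lithium_per_kwh
total_tons = total_kg / 1000   # kilograms -> metric tons
print(total_tons)              # 18000.0 tons of lithium metal
```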

According to Chemetall's analysis, the following figure depicts the expected rise in demand for electric vehicles.<br />

Figure 6: Expected Rise in Demand of Electrical Vehicle<br />

Although other battery options, such as nickel-metal hydride (Ni-MH) and nickel-cadmium<br />

(Ni-Cd), are available in the market, lithium-ion batteries beat the competition because they have higher<br />

charge density, smaller size, lighter weight and a low self-discharge rate of approximately 5% per month, compared<br />

with over 30% per month in common Ni-MH batteries and about 10% per month in nickel-cadmium batteries. In addition,<br />

lithium-ion batteries deliver performance that helps to protect the environment, with features such<br />

as improved charge efficiency and no memory effect.<br />
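To see what those self-discharge figures mean in practice, the monthly rates can be compounded over a storage period; the sketch below assumes an idealized geometric decay at the rates quoted above.<br />

```python
# Charge remaining after six months in storage, compounding the monthly
# self-discharge rates quoted above (an idealized geometric model).
monthly_self_discharge = {"Li-Ion": 0.05, "Ni-Cd": 0.10, "Ni-MH": 0.30}

for chemistry, rate in monthly_self_discharge.items():
    remaining = (1 - rate) ** 6
    print(f"{chemistry}: {remaining:.1%} of charge remains")
# Li-Ion retains about 73.5% of its charge; Ni-MH only about 11.8%.
```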

This is one area where lithium is expected to help resolve the global oil crisis by lowering the demand for<br />

petroleum for automotive purposes.<br />

3.3.3 The Green Effect<br />

Oil currently also faces heat on the ecological front. From its extraction process to its<br />

application, oil is considered harmful to the environment because of its large "carbon footprint".<br />

In contrast to oil, lithium is an extremely environmentally friendly option. The extraction of lithium is<br />

eco-friendly, with minimal energy consumption: a small amount is consumed by pumping the brine<br />

from the ground, while the energy used for the evaporation is solar. Also, the desirable salts of lithium or potassium<br />



chloride and the by-products such as magnesium or sodium chloride are substances that are already present in the<br />

soil and not carcinogenic or dangerous to the environment.<br />

The obvious question that arises is: "What happens after the battery discharges?" The answer is<br />

plain and simple. The lithium-ion battery is non-biodegradable, but the substances released are not harmful to the<br />

environment.<br />

3.3.4 Issues and Challenges<br />

According to research by William Tahil, Research Director of Meridian International Research, lithium-ion<br />

batteries may be sustainable for portable electronic goods, but they are not sustainable for EV applications. A balanced<br />

scientific and economic analysis of the sustainability of Li-Ion technology for EV applications has not been<br />

performed.<br />

The research further states that lithium is not a traded metal, but raw lithium carbonate was until recently valued at about<br />

$1/kg. During 2005 and 2006 this rose to over $5/kg, and apparently some Japanese Li-Ion battery manufacturers are<br />

now offering $10/kg, or $10,000 per ton, a tenfold increase in two years. This will only continue to rise, as supply is<br />

limited to the few brine and salt deposits.<br />

The projected costs for Li-ion batteries are still on the order of $300-$450 per kWh, even at high production volumes. A 30 kWh Li-ion battery would therefore cost at least $9,000: prohibitive for the mass market. He therefore recommends that the factors of performance, safety, cost, simplicity and industrial availability, as well as the very significant geostrategic and environmental-protection implications of dependence on lithium, should make ZnAir and NaNiFeCl batteries the prime choice for meeting the urgent need to reduce oil consumption immediately, at all costs, or face the consequences of a meltdown in civilization.<br />
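The pack-cost figure quoted above follows from simple arithmetic. This back-of-the-envelope sketch uses only the $300-$450/kWh range and the 30 kWh pack size mentioned in the text:

```python
# Back-of-the-envelope pack cost from the projected $/kWh range quoted above
cost_per_kwh_low, cost_per_kwh_high = 300, 450  # projected Li-ion cost, $/kWh
pack_size_kwh = 30                              # EV battery pack size, kWh

low_cost = pack_size_kwh * cost_per_kwh_low     # lower bound of pack cost
high_cost = pack_size_kwh * cost_per_kwh_high   # upper bound of pack cost

print(f"30 kWh pack: ${low_cost:,} to ${high_cost:,}")
```

At the low end of the range this reproduces the $9,000 figure cited in the text.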

4 Summary<br />

To summarize, over the past few years we have seen a great rise in the demand for lithium in all segments.<br />
<br />
The segment with the most to gain from lithium is the battery segment, because of the rise in demand from the automobile industry. Associated with this rise are apprehensions and doubts about whether the world actually holds enough lithium to sustain the growing automotive demand; we can only know this when that time actually comes, but until then it is certain that lithium will keep gaining ground in this sector.<br />
<br />
Linked with the rise of lithium lies the fortune of South America, especially Bolivia. The country is already realizing the prospects it holds and is taking political steps to maintain its edge.<br />

5 References<br />

Gruber P., Medina P., Global Lithium Availability: A Constraint for Electric Vehicles.<br />
<br />
Tahil W., The Trouble with Lithium: Implications of Future PHEV Production for Lithium Demand.<br />
<br />
Kawamoto H., Trends of R&D on Materials for High-power and Large-capacity Lithium-ion Batteries for Vehicle Applications.<br />
<br />
http://www.lithiumalliance.org<br />
<br />
http://www.darkmatterpolitics.typepad.com<br />
<br />
http://www.fmclithium.com/<br />



CORPORATE FINANCE<br />



PREDICTING BUSINESS FAILURE USING DATA-MINING METHODS<br />

Sami BEN JABEUR, Youssef FAHMI<br />

Université de Bretagne-Sud (IREA)<br />

benjabeursami@yahoo.fr, fyoussef@hotmail.com<br />

Abstract: The aim of this paper is to compare two statistical methods for predicting corporate financial distress: PLS (Partial Least Squares) discriminant analysis and the support vector machine (SVM). PLS discriminant analysis (PLS-DA) is a regression method connecting a qualitative dependent variable to a set of quantitative or qualitative explanatory variables. The SVM may be viewed as a non-parametric technique; it is based on the use of a so-called kernel function which allows an optimal separation of the data. In this work we use a sample of French firms for which a set of financial ratios is calculated.<br />

Keywords: financial distress prediction, PLS discriminant analysis, Support Vector Machine<br />

1. Introduction<br />

This paper falls within the framework of research on bankruptcy forecasting models, which can be used to detect the financial difficulties of SMEs (small and medium-sized enterprises).<br />
<br />
Research on forecasting the financial difficulties of companies is of great importance to all of a company's partners. From a manager's point of view, having tools for forecasting financial failure makes it possible to take timely strategic and corrective management measures to prevent these failures (Hua et al., 2007). For other partners, such tools help to reduce information asymmetry with the company, to detect vulnerable companies quickly, and to optimize the allocation of their capital (financial, human, social). Forecasting financial difficulties thus constitutes a means of diagnosing company performance, and ends in a classification of companies as failing or non-failing. In particular, forecasting failing companies can help to prevent difficulties before they translate into a financial crisis, and to put the necessary measures in place (assistance, restructuring) before it is too late.<br />
<br />
The use of corporate bankruptcy forecasting models can vary according to the tools and methodologies used, the sample retained, the explanatory variables chosen, and the method used to validate the results.<br />
<br />
Before presenting the main forecasting methods used in this research, it is useful to recall briefly the definitions of financial distress.<br />

2. Definition of the financial distress<br />

In recent years the annual number of company failures has not stopped growing, and this trend becomes more marked during periods of crisis. According to INSEE (the French National Institute for Statistics and Economic Studies), the number of company failures reached 52,103 in 2010.<br />

Table 1: Failures of French companies (published in the BODACC) between 1993 and 2010<br />

Year | 1993 | 1994 | 1995 | 1996 | 1997 | 1998 | 1999 | 2000 | 2001<br />
Failing companies | 55174 | 57494 | 52640 | 56858 | 53261 | 47525 | 42132 | 38346 | 36941<br />
<br />
Year | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010<br />
Failing companies | 38202 | 43068 | 42368 | 43138 | 40407 | 42840 | 51254 | 53739 | 52103<br />



The movement of failures concerns in particular companies with 10 to 50 employees and those whose turnover is between 500,000 and 2 million euros. In 2009 and 2010, the fiscal stimulus measures saved between 15,000 and 30,000 companies. SMEs (small and medium-sized enterprises), defined as companies with fewer than 250 employees and less than 50 million euros of turnover, benefited from 40% of these measures, i.e. 6.4 billion euros.<br />

Even if failure can have multiple causes (a fall in demand and difficulties of access to markets, management problems, bad strategic choices and management errors, the competence and training of the team, obsolescence of the production tool, undercapitalization, fraud), it is generally reflected in financial distress and a deterioration of the company's financial situation (reduced activity, decreased profitability, cash-flow problems, financial imbalance).<br />

Wruck (1990) defines financial distress as a situation where cash flows are insufficient to cover current obligations. These obligations can include debts to suppliers and other expenses. According to Baldwin and Scott (1983), when a company's situation degrades to the point where it cannot meet its financial constraints, the firm enters a state of financial distress. The same authors argue that this situation is the result of a bad economic situation, a decline in performance, and low-quality management. For their part, Ooghe and Van Wymeersch (1996) explain financial distress in particular by liquidity problems (caused by insufficient profitability and a lack of resources) and by a high level of debt that leads to solvency problems for the company.<br />

Several approaches to forecasting corporate financial distress have been proposed since the end of the 1960s, using financial ratios and accounting data. The purpose is to differentiate failing companies from non-failing companies, and to build an explanatory model of corporate failure. The most widely used techniques are the univariate approach, the multivariate approach, multiple discriminant analysis, multiple regression, logit regression, and models based on artificial neural networks. These forecasting methods share a common view of the evaluation of corporate default risk, in the sense that they rely on financial analysis and on the exploitation of ex-ante knowledge about the companies' future (Refait, 2004).<br />
<br />
In this work, we apply the PLS discriminant approach and the SVM approach to the detection of failing companies. Both approaches rely on a sample of 800 companies over the period 2006 to 2008, and on a set of 33 financial ratios.<br />

3. Analysis method<br />

3.1. PLS discriminant analysis<br />
<br />
There are several versions of the univariate PLS1 regression algorithm. They differ in the normalizations and intermediate calculations, but they yield essentially the same regression. According to Bastien, Esposito Vinzi and Tenenhaus (2005), the PLS discriminant regression algorithm (PLS-DA) can be decomposed as follows; the process can be repeated by using in the same way the residuals Y2, X21, ..., X2k of the regressions of Y, X1, ..., Xk on t1 and t2.<br />
<br />
The PLS1 algorithm then reads:<br />

Stage 1: X0 = X; y0 = y (initialization)<br />
Stage 2: for h = 1, ..., r:<br />
Stage 2.1: wh = X'h-1 yh-1 / (y'h-1 yh-1)<br />
Stage 2.2: normalize wh to unit length<br />
Stage 2.3: th = Xh-1 wh / (w'h wh)<br />
Stage 2.4: ph = X'h-1 th / (t'h th)<br />
Stage 2.5: Xh = Xh-1 - th p'h<br />
Stage 2.6: ch = y'h-1 th / (t'h th)<br />
Stage 2.7: uh = yh-1 / ch<br />
Stage 2.8: yh = yh-1 - ch th<br />

Once the number of components has been retained, we perform a discriminant analysis on these components rather than on the original variables.<br />
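The stages above can be sketched in a few lines of NumPy. This is a minimal illustration under the notation above, not the authors' implementation; it omits stage 2.7 (uh is not needed for the component scores) and the choice of the number of components r:

```python
import numpy as np

def pls1(X, y, r):
    """Minimal PLS1 sketch following stages 2.1-2.8 above.

    X: (n, k) predictor matrix, y: (n,) response, r: number of components.
    Returns component scores T (n, r) and weights W (k, r)."""
    Xh, yh = X.astype(float).copy(), y.astype(float).copy()
    T, W = [], []
    for _ in range(r):
        w = Xh.T @ yh / (yh @ yh)   # stage 2.1
        w = w / np.linalg.norm(w)   # stage 2.2: normalize w_h
        t = Xh @ w / (w @ w)        # stage 2.3 (w'w = 1 after normalization)
        p = Xh.T @ t / (t @ t)      # stage 2.4
        Xh = Xh - np.outer(t, p)    # stage 2.5: deflate X
        c = (yh @ t) / (t @ t)      # stage 2.6
        yh = yh - c * t             # stage 2.8: deflate y
        T.append(t)
        W.append(w)
    return np.column_stack(T), np.column_stack(W)
```

By construction the successive score vectors th are mutually orthogonal, which is what allows the discriminant analysis to be run on the components instead of the original, correlated ratios.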

3.2. The principles of the SVM method<br />

Without going into technical details, we briefly expose the principles of the "Support Vector Machines" method.<br />
<br />
Support Vector Machines (SVM) are classification techniques based on statistical learning theory (Vapnik, 1995, 1998). SVMs can be used to solve regression problems, i.e. to predict the numerical value of a variable, or discrimination problems, i.e. to decide to which class a sample belongs.<br />
<br />
As a classification method, the SVM approach represents a kind of generalized discriminant analysis, carried out in a space of sufficiently high dimension that a linear separation exists. It proceeds in two stages. In the first, a non-linear transformation maps the original space into a space of higher dimension endowed with a scalar product. In the second stage, we look for a linear separator:<br />

f(x) = a . x + b, a hyperplane separating the whole set of learning points so that all points of the same class lie on the same side of the hyperplane. The hyperplane must simultaneously fulfill two conditions:<br />
<br />
- it separates the groups (accuracy of the model), in the sense that f(x) > 0 => class A and f(x) <= 0 => class B;<br />
<br />
- it is the farthest from all observations (robustness of the model), knowing that the distance of an observation x to the hyperplane is |a . x + b| / ||a||, the margin being 2 / ||a||.<br />
<br />
Given points (xi, yi) with yi = 1 if xi is in A and yi = -1 if xi is in B, finding the linear separator f(x) = a . x + b is equivalent to finding a pair (a, b) that simultaneously satisfies two conditions:<br />
<br />
- for all i, yi (a . xi + b) >= 1 (good separation);<br />
<br />
- ||a||^2 is minimal (maximum margin).<br />

SVMs seek, among all possible hyperplanes, the one that maximizes the distance between the decision hyperplane and the nearest points of each class. When the two populations are not perfectly discriminated or separated but overlap, a penalty term must be added to each of the two previous expressions.<br />
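As a small numerical illustration of the two conditions above (good separation and maximum margin), consider a toy 2-D data set and a candidate hyperplane. The data and the coefficients (a, b) below are invented for the example and are not the optimal SVM solution:

```python
import numpy as np

# Toy 2-D training points: class A (y = +1) and class B (y = -1)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])

# A candidate separating hyperplane f(x) = a.x + b (not necessarily optimal)
a, b = np.array([0.5, 0.5]), 0.0

# Good separation: y_i (a.x_i + b) >= 1 for every training point
separation = y * (X @ a + b)
assert np.all(separation >= 1)

# Margin of this hyperplane: 2 / ||a||
margin = 2 / np.linalg.norm(a)
```

An SVM would search, over all pairs (a, b) satisfying the first condition, for the one with the smallest ||a||, i.e. the largest margin.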

The solution f(x) is expressed as a function of inner products x . x'. After the transformation phi, it is expressed in terms of inner products phi(x) . phi(x'). The quantity k(x, x') = phi(x) . phi(x') is called the kernel. In the algorithm, it is the kernel k, and not phi, that is chosen, and k(x, x') can be computed without exhibiting phi. The calculations are then carried out in the original space and become simpler and faster. This is why the method is called a kernel machine.<br />
<br />
Examples of kernels include:<br />
<br />
- linear: k(x, x') = x . x';<br />
<br />
- polynomial: k(x, x') = (x . x')^d; if d = 2 and x = (x1, x2), then phi(x) = (x1^2, sqrt(2) x1 x2, x2^2) and phi(x) . phi(x') = (x . x')^2;<br />
<br />
- Gaussian (RBF): k(x, x') = exp(-||x - x'||^2 / 2 sigma^2), one of the most commonly used;<br />
<br />
- sigmoid: k(x, x') = tanh(kappa (x . x') + theta), where kappa is the gain and theta the threshold.<br />
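The degree-2 polynomial case above can be checked numerically: the explicit map phi gives the same inner product as the kernel computed in the original space. The vectors below are arbitrary illustrative values:

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel (d = 2)."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2.0) * x1 * x2, x2**2])

def poly_kernel(x, xp):
    """k(x, x') = (x . x')^2, computed without leaving the original space."""
    return float(np.dot(x, xp)) ** 2

x = np.array([1.0, 2.0])
xp = np.array([3.0, -1.0])

# phi(x) . phi(x') equals (x . x')^2 -- the "kernel trick"
assert np.isclose(np.dot(phi(x), phi(xp)), poly_kernel(x, xp))
```

This is why the algorithm only needs k and never computes phi explicitly: the 3-dimensional feature space here (and the much larger spaces of other kernels) is visited only implicitly.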

4. Methodological choice<br />

4.1. The construction of the sample<br />

Our approach to data collection involves several steps: the choice of the database, the selection of companies and the<br />

indicators of financial failure.<br />

4.1.1. Presentation of data:<br />

We use the DIANE database (instant access to data on French companies for economic analysis) for our sample. This database provides access to more than one million businesses, namely the companies that file their annual accounts with the registries of the commercial courts. Collected by Coface Services, the information on company accounts is enriched with a wealth of related information. With ten years of historical accounts, DIANE is the most comprehensive source of financial and general information on French enterprises.<br />

4.1.2. The survey sample:<br />

A company was considered failing if a first declaration of default was filed with the commercial court during 2009. The data are organized so that the accounting years 2008, 2007 and 2006 are available. The final sample obtained at the end of this rigorous selection process consists of 800 companies split into two sub-samples: 400 healthy firms and 400 failing firms. This choice was dictated by constraints arising from the availability of identifying information on the companies.<br />

4.2. The selection of indicators and financial ratios<br />

The financial ratios were selected with the aim of providing a relevant and credible battery of indicators capable of meeting the objectives and expectations of external reviews. The financial ratios were selected according to:<br />
<br />
- their recurrence in the French literature (Bardos, 1995; the Banque de France's work) and in the international literature (Altman, 1968, 1984; Conan and Holder, 1979; Rose and Giroux, 1984; Refait, 2004);<br />
<br />
- their relevance to financial analysis, incorporating the basic ratios found in most existing bankruptcy detection models: liquidity, profitability, management, productivity and financial structure ratios.<br />
<br />
Thus, a series of 33 ratios (R01 to R33) was retained among those commonly used in the literature that have a significant informational content for the analysis of the financial situation of the company.<br />
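The paper does not reproduce the list of the 33 ratios. As an illustration only, the sketch below computes a few standard ratios of the kind used in such models; the function names and sample figures are hypothetical and are not the paper's R01-R33:

```python
# Hypothetical examples of the kind of ratios used in failure-prediction
# models; illustrations only, not the paper's actual R01-R33 list.
def current_ratio(current_assets, current_liabilities):
    """Liquidity: ability to cover short-term obligations."""
    return current_assets / current_liabilities

def return_on_assets(net_income, total_assets):
    """Profitability: income generated per unit of assets."""
    return net_income / total_assets

def debt_ratio(total_debt, total_assets):
    """Financial structure: share of assets financed by debt."""
    return total_debt / total_assets

# Sample (invented) balance-sheet figures, in thousands of euros
ratios = {
    "liquidity": current_ratio(450.0, 300.0),
    "profitability": return_on_assets(60.0, 1200.0),
    "leverage": debt_ratio(700.0, 1200.0),
}
```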

5. Results<br />

5.1. The classification of enterprises by PLS discriminant analysis<br />
<br />
The PLS-DA analysis allowed six components to be retained; the classification results are presented in the following table:<br />



Table 2: Model validation (PLS-DA). Rows give the real group; columns give the group allocated by the model 1, 2 or 3 years before failure (H: healthy, D: defaulting).<br />
<br />
Real group | T-1: H D Total | T-2: H D Total | T-3: H D Total<br />
Healthy (number) | 377 23 400 | 274 126 400 | 245 155 400<br />
Defaulting (number) | 5 395 400 | 137 263 400 | 161 239 400<br />
Healthy (%) | 94.25 5.75 100 | 68.5 31.5 100 | 61.25 38.75 100<br />
Defaulting (%) | 1.25 98.75 100 | 34.25 65.75 100 | 40.25 59.75 100<br />
<br />
T-1: According to this table, the results obtained with the PLS-DA method are attractive compared to discriminant analysis (DA1). Indeed, the rate of correct classification of companies is about 96.5%, with a type 1 error (percentage of failing firms classified as healthy) of 1.25% and a type 2 error (percentage of healthy firms classified as failing) of 5.75%. PLS-DA identified 377 healthy firms, a rate of 94.25%, while only five failing firms were classified as healthy, giving a rate of 98.75% for the failing group.<br />
<br />
T-2: In the group of healthy companies, the model classifies 126 companies as failing although they are not. Likewise, in the group of failing firms, 137 are classified as healthy, which is not the case. The "two years before failure" model thus reveals a decrease in the classification rate of healthy firms from 94.25% to 68.5%, and of failing firms from 98.75% to 65.75%. The further ahead the forecast, the lower the accuracy of the model.<br />
<br />
T-3: Looking at this table, the results obtained with the PLS method remain attractive compared to discriminant analysis. The rate of correct classification of companies is about 60.5%, with a type 2 error of 38.75% and a type 1 error of about 40.25%.<br />
<br />
Regarding the predictive power of the model over the span from two to three years before failure, we find a drop in the correct classification rate of healthy firms from 68.5% to 61.25%, and of failing firms from 65.75% to 59.75%. Thus, the further away the forecast horizon, the more the ability of the model is reduced. In conclusion, we detect a superiority of PLS regression one, two and three years before failure compared to traditional discriminant analysis. Indeed, one year before failure, the rate of correct classification of firms is 95.9% for DA1 and 96.5% for PLS-DA1. Two years before failure, the rates of correct classification are 64.8% and 67.125% for DA2 and PLS-DA2 respectively. In the third year, the rates are 59.25% for DA3 and 61.25% for PLS-DA3. These results are confirmed by previous work applied to scientific fields other than failure prediction by Nguyen and Rocke (2004) and Bastien et al. (2005), who show that, by relaxing the restrictive assumptions of discriminant analysis, PLS regression can be successfully applied to such data.<br />
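The error rates discussed for T-1 can be recomputed directly from the confusion-matrix counts in Table 2. This short sketch makes the type 1 / type 2 definitions explicit:

```python
# T-1 counts from Table 2 (PLS-DA): (predicted healthy, predicted failing)
healthy_real = (377, 23)   # real healthy firms
failing_real = (5, 395)    # real failing firms

n_healthy = sum(healthy_real)   # 400
n_failing = sum(failing_real)   # 400

type1 = failing_real[0] / n_failing   # failing firms classified as healthy
type2 = healthy_real[1] / n_healthy   # healthy firms classified as failing
overall = (healthy_real[0] + failing_real[1]) / (n_healthy + n_failing)

print(f"type 1 error: {type1:.2%}")              # 1.25%
print(f"type 2 error: {type2:.2%}")              # 5.75%
print(f"correct classification: {overall:.2%}")  # 96.50%
```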

5.2. The classification of enterprises by the SVM method<br />


The results of the classification of enterprises by the SVM method are presented in the following table:<br />



Table 2: Validation of the SVM model. Rows give the real group; columns give the group allocated by the model 1, 2 or 3 years before failure (H: healthy, D: defaulting).<br />
<br />
Real group | T-1: H D Total | T-2: H D Total | T-3: H D Total<br />
Healthy (number) | 370 30 400 | 248 152 400 | 284 116 400<br />
Defaulting (number) | 11 389 400 | 123 277 400 | 210 190 400<br />
Healthy (%) | 92.5 7.5 100 | 62 38 100 | 71 29 100<br />
Defaulting (%) | 2.75 97.25 100 | 30.75 69.25 100 | 52.5 47.5 100<br />
<br />
T-1: One year before failure, the rate of correct classification of firms is around 94.875%, with a type 2 error of 7.5% and a type 1 error of 2.75%. The SVM method identified 370 of the healthy companies, a rate of 92.5%, while 30 healthy firms were classified as failing. Similarly, it identified 389 failing firms, a rate of 97.25%; the failing companies classified as healthy number 11.<br />
<br />
T-2: Two years before failure, the SVM approach gave a rate of correct classification of the order of 65.625%, with a type 2 error of 38% and a type 1 error of 30.75%. The method identified 248 healthy firms, a rate of 62%, while 152 healthy companies were classified as failing. In addition, it identified 277 failing companies, a rate of 69.25%; the failing companies classified as healthy number 123.<br />
<br />
T-3: Three years before failure, the SVM approach gave a rate of correct classification of approximately 59.25%, with a type 2 error of 29% and a type 1 error of 52.5%. The method identified 284 healthy companies, a rate of 71%, while 116 healthy companies were classified as failing. In addition, it identified 190 failing companies, a rate of 47.5%; the failing companies classified as healthy number 210.<br />

5.3. Comparison of two methods of classification<br />

The following table compares the predictive performance of the PLS-DA and SVM approaches.<br />

Table 3: Performance of the PLS-DA and SVM approaches (%)<br />
<br />
| Healthy firms (PLS-DA / SVM) | Failing firms (PLS-DA / SVM) | Good ranking (PLS-DA / SVM)<br />
T-1 | 94.25 / 92.5 | 98.75 / 97.25 | 96.5 / 94.875<br />
T-2 | 68.5 / 62 | 65.5 / 69.25 | 67 / 65.625<br />
T-3 | 61.25 / 71 | 59.75 / 47.5 | 60.5 / 59.25<br />


The rates of correct classification provided by the two methods are very close. The rate of correct classification of non-failing firms is the share of non-failing firms correctly classified among all non-failing firms. Similarly, the rate of correct classification of failing firms is the share of failing firms correctly classified among all failing firms.<br />
<br />
The comparison of the overall performance of the two methods shows a clear superiority of the PLS-DA. One year before failure, the PLS-DA is more effective than the SVM approach, since it leads to a correct classification rate of 96.5%, against 94.875% for the SVM method. Two years before failure, the rates were 67% for the PLS-DA and 65.625% for the SVM method. Finally, three years before failure, the rates were 60.5% for the PLS-DA and 59.25% for the SVM method.<br />



The SVM method is superior to the PLS-DA method only for the classification of failing firms two years before failure (69.25% against 65.5% for the PLS-DA), and for the classification of non-failing firms three years before failure (71% against 61.25% for the PLS-DA).<br />

6. Conclusion<br />

As demonstrated by previous studies and recent research in this field, we find that predictive power is weakened for models that use less information, or for models that are intended to predict business failure over a more distant period of time. In conclusion, we detected a clear advantage of the PLS-DA method over the SVM method in the analyses one to three years before failure. We have shown that the principles of PLS regression can be combined with discriminant analysis without difficulty; PLS regression has an advantage in terms of classification, takes the important variables into account in the analysis, and can also handle the correlation between explanatory variables, a problem on which the majority of conventional methods founder.<br />

7. Bibliography<br />

Altman E.I. (1968), Financial ratios, discriminant analysis and the prediction of corporate bankruptcy, Journal of<br />

Finance, 23(4), September, 589-609.<br />

Albert A., Anderson J. (1984), On the Existence of Maximum Likelihood Estimates in Logistic Regression Models,<br />

Biometrika, 71(1), 1-10.<br />

Amemiya T. (1981), Qualitative Response Models: A Survey, Journal of Economic Literature, 19(4), 481-536.<br />

Banque de France, Gourieroux C., Foulcher S., Tiomo A. (2003), La structure par termes des taux de défauts et ratings, Cahiers Études et Recherches de l'Observatoire des Entreprises, Direction des Entreprises, Banque de France, 1.34.<br />

Banque de France, Stili D. (2002), Détection précoce du risque de défaillance dans le secteur de la construction.<br />

Cahiers Etudes et recherches de l’observatoire des entreprises, Direction des entreprises, Banque de France, 1.70.<br />

Banque de France, Planes B. (2000), Détection précoce du risque de défaillance dans le secteur hôtels/restaurants : SCORE BDFHR, Cahiers Études et Recherches de l'Observatoire des Entreprises, Direction des Entreprises, Banque de France, 1.55.<br />

Bardos M. (2008), Scoring sur données d’entreprises : instrument de diagnostic individuel et outil d’analyse de<br />

portefeuille d’une clientèle, Revue MODULA, 38, 159 – 177.<br />

Bardos M. (2001) : Analyse discriminante: application au risque et scoring financier, Dunod.<br />

Bardos M. (1998), Detecting the risk of company failure at the Banque de France, Journal of Banking and Finance,<br />

22, 1405-1419.<br />

Bardos M., Zhu W.H. (1997), Comparaison de l'analyse discriminante linéaire et des réseaux neuronaux :<br />

application à la détection de défaillance d'entreprises, Revue Statistique Appliquée, XLV (4), 65-92.<br />

Bastien P, Vinzi V. E, Tenenhaus M. (2005), PLS generalized linear regression, Computational Statistics and Data<br />

Analysis, 48-1.17-46.<br />

Ben Jabeur.S.(2009), Prévision de la détresse financière des entreprises : approche par l’analyse discriminante<br />

PLS, AFFI 2009 colloque international de l’association française de la finance, 13-14-15 Mai, Brest.<br />

Chen W-S., Yin-Kuan Du Y-K.(2009), Using neural networks and data mining techniques for the financial distress<br />

prediction model, Expert Systems with Applications 36, 4075-4086.<br />

Chou H-I., Li H., Yin X. (2010), The effects of financial distress and capital structure on the work effort of outside<br />

directors, Journal of Empirical Finance 17, 300–312.<br />

Ding Y., Song X., Zen Y. (2008), Forecasting financial condition of Chinese listed companies based on support<br />

vector machine, Expert Systems with Applications 34, 3081–3089.<br />

Esposito Vinzi V, Trinchera L, Squillacciotti S, Tenenhaus M. (2008), REBUS-PLS: A response-based procedure<br />

for detecting unit segments in PLS path modeling, Applied Stochastic Models in Business and Industry, 24-5, 439-<br />

458.<br />

Fisher N. (2004), Fusion statistique de fichiers de données, Thèse de doctorat, CNAM, Paris.<br />

Fort G., Lambert-Lacroix S. (2005), Classification using Partial Least Squares with Penalized Logistic Regression,<br />

Bioinformatics, 21(7), 1104-1111.<br />

Franks J. R., Sussman O. (2005), Financial distress and bank restructuring of small to medium size U.K. companies,<br />

The Review of Finance 9, 65–96.<br />



Guilhot B. (2000), Défaillances d'entreprise : soixante-dix ans d'analyse théoriques et empiriques, Revue Française<br />

de Gestion, 130, 52-67.<br />

Hamza T., Bagdadi K. (2008), Profil et déterminants financiers de la défaillance des PME tunisiennes (1993-2003),<br />

Banque & Marchés, 93, 45-62.<br />


Jones S., Hensher D.A. (2004), Predicting firm financial distress: a mixed Logit model, The Accounting Review,<br />

79(4), 1011-1038.<br />

Koenig G. (1985), Entreprises en difficulté: des symptômes aux remèdes, Revue Française de Gestion,<br />

janvier/février, 84-92.<br />

Koenig G. (1991), Difficultés d’entreprise et inertie active », Direction et Gestion, 126-127.<br />

Lin T. H. (2009), A cross model study of corporate financial distress prediction in Taiwan: Multiple discriminant<br />

analysis, logit, probit and neural networks models, Neurocomputing 72, 3507-3516.<br />

Li H., Jie Sun J. (2009), Predicting business failure using multiple case-based reasoning combined with support<br />

vector machine, Expert Systems with Applications 36, 10085–10096.<br />

Li H, Sun J, Wu J. (2010), Predicting business failure using classification and regression tree: An empirical<br />

comparison with popular classical statistical methods and top classification mining methods, Expert Systems with<br />

Applications 37, 5895–5904<br />

Nguyen D., Rocke D. (2004), On partial least squares dimension reduction for micro-array-based classification: a<br />

simulation study, Comput. Stat. Data Anal, 46, 407-425.<br />

Ohlson J. A. (1980), Financial Ratios and the Probabilistic Prediction of Bankruptcy, Journal of Accounting<br />

Research, 18-1,109-131.<br />


Pages J., Tenenhaus M. (2001), Multiple factor analysis combined with PLS path modeling, Revue de Statistique<br />

Appliquée, XLIV (2), 35-60.<br />

Pindado J., Rodrigues L., Torre C. (2008), Estimating financial distress likelihood, Journal of Business Research 61,<br />

995–1003.<br />

Platt H. D., Platt M. B. (1990), Development of a Class of Stable Predictive Variables: The Case of Bankruptcy<br />

Prediction, Journal of Business Finance and Accounting, 17, 1, pp. 31-51.<br />

Platt H. D., Platt M. B. (2002), Predicting Corporate Financial Distress: Reflections on Choice-Based Sample Bias,<br />

Journal of Economics and Finance, vol. 26, 2, 184-199.<br />

Refait C. (2004), La prévision de la faillite fondée sur l’analyse financière de l’entreprise: un état des lieux,<br />

Économie et Prévision, 162, 129-147.<br />

Ross S.A., Westerfield R.W., Jaffe J.F. (2005), Finanzas corporativas, Ed. McGraw Hill (7ª edición).<br />

Saporta G., Preda C. (2002), Régression PLS sur un Processus stochastique, Revue de statistique Appliquée, 50 (2),<br />

27-45.<br />

Stéphane T (2007), Data Mining et statistique décisionnelle, Éditions Technip, 3 éme édition revue et enrichie, juin<br />

2007.<br />

Sun J., Li H. (2010), Financial distress early warning based on group decision making, Computers & Operations<br />

Research 36, 885 – 906.<br />

Tenenhaus M. (1998), La régression PLS : théorie et pratique, Technip, Paris.<br />

Tenenhaus M. (1995), Nouvelles méthodes de régression PLS, Les cahiers de recherche, CR 540.<br />

Tenenhaus M. (2000), La Régression Logistique PLS, Journées d’Etudes en Statistique, Modèles Statistiques pour<br />

données Qualitatives, 261-273.<br />

Tenenhaus M., Gauchi J.P., Menardo C. (1995), Régression PLS et Applications, Revue de Statistiques Appliquée,<br />

43(1), 7-63.<br />


Wruck K.H. (1990), Financial distress, reorganization, and organizational efficiency, Journal of Financial<br />

Economies, 27, 419-444.<br />

Zopounidis C. (1995), Evaluation du risque de défaillance de l'entreprise : Méthodes et cas d'application,<br />

Economica, série Techniques de Gestion, Paris.<br />

Zhongsheng H., Yu W., Xiaoyan Xu, Zhang B., Liang L. (2007), Predicting corporate financial distress based on<br />

integration of support vector machine and logistic regression, Expert Systems with Applications 33, 434–440.<br />



SIMULTANEOUS DETERMINATION OF CORPORATE DECISIONS: AN EMPIRICAL<br />

INVESTIGATION USING UK PANEL DATA<br />

Qingwei Meng, Birmingham Business School, University of Birmingham<br />

Email: qxm645@bham.ac.uk<br />

Abstract: We empirically investigate the joint determination of corporate investment, financing, and payout decisions under financial<br />

constraints and uncertainty, using a panel of UK-listed firms. We model these corporate decisions within a simultaneous equations<br />

system, where we treat each decision as endogenous and allow for their contemporaneous interdependence, as implied by the flow-of-funds<br />
framework. We find that capital investment and dividend payout, as competing uses of limited funds, are negatively interrelated,<br />

while both of them are positively interrelated to net amounts of new debt issued during the corresponding period, suggesting the<br />

existence of a joint determination of corporate decisions under financial constraints. We also offer the first attempt to examine the<br />

effects of uncertainty on the set of corporate decisions within the simultaneous equations system. We find that the effect of uncertainty<br />

on corporate investment is significant and positive, while that of uncertainty on dividend payouts is significant and negative. We<br />

further observe that the simultaneity among the corporate decisions is more pronounced for firms which are financially more constrained,<br />
while the effect of uncertainty is more significant for firms which are financially less constrained. Accordingly, our results<br />

suggest that financial constraints intensify the simultaneity among corporate decisions, and reduce managerial flexibility in response to<br />

uncertainty. Therefore, this paper reveals new insight into the complex interdependence of corporate behaviour by UK-listed firms,<br />

under financial constraints and uncertainty.<br />

Keywords: Corporate investment; Debt financing; Dividend payout; Uncertainty; UK-listed firms<br />

JEL classification: G31; G32; G35<br />

References:<br />

Abdul, H., & Wang, S. (2008). Uncertainty and investment evidence from a panel of Chinese firms. Structural<br />

Change and Economic Dynamics, 19(3), 237-248.<br />

Abel, A. B. (1983). Optimal investment under uncertainty. American Economic Review, 73(1), 228-233.<br />

Almeida, H., & Campello, M. (2007). Financial constraints, asset tangibility, and corporate investment. Review of<br />

Financial Studies, 20(5), 1429-1460.<br />

Baker, H. K., Powell, G. E., & Veit, E. T. (2002). Revisiting managerial perspectives on dividend policy. Journal of<br />

Economics and Finance, 26(3), 267-283.<br />

Baum, C. F., Stephan, A., & Talavera, O. (2009). The effects of uncertainty on the leverage of nonfinancial firms.<br />

Economic Inquiry, 47(2), 216-225.<br />

Baum, C. F., Caglayan, M., & Talavera, O. (2008). Uncertainty determinants of firm investment. Economics<br />

Letters, 98(3), 282-287.<br />

Bharath, S. T., Pasquariello, P., & Wu, G. (2009). Does asymmetric information drive capital structure decisions?<br />

Review of Financial Studies, 22(8), 3211-3243.<br />

Bloom, N., Bond, S., & Van Reenen, J. (2007). Uncertainty and Investment Dynamics. Review of Economic Studies,<br />

74(2), 391-415.<br />

Bond, S. R., & Cummins, J. G. (2004). Uncertainty and investment: An empirical investigation using data on<br />

analysts’ profits forecasts. Finance and Economics Discussion Series Working Paper, No. 2004(20).<br />

Bond, S., & Meghir, C. (1994). Dynamic investment models and the firm’s financial policy. Review of Economic<br />

Studies, 61(2), 197-222.<br />

Bond, S., Elston, J. A., Mairesse, J., & Mulkay, B. (2003). Financial factors and investment in Belgium, France,<br />

Germany, and the United Kingdom: A comparison using company panel data. Review of Economics and<br />

Statistics, 85(1),153-165.<br />

Brailsford, T. J., Oliver, B. R., & Pua, S. L. H. (2002). On the relation between ownership structure and capital<br />

structure. Accounting and Finance, 42(1), 1-26.<br />



Brav, A., Graham, J. R., & Harvey, C. R., & Michaely, R. (2005). Payout policy in the 21st century. Journal of<br />

Financial Economics, 77(3), 483-527.<br />

Bulan, L. T. (2005). Real options, irreversible investment and firm uncertainty: New evidence from U.S. firms.<br />

Review of Financial Economics, 14(3-4), 255-279.<br />

Carruth, A., Dickerson, A., & Henley, A. (2000). What do we know about investment under uncertainty? Journal of<br />

Economic Surveys, 14(2), 119-153.<br />

Chay, J. B., & Suh, J. (2009). Payout policy and cash-flow uncertainty. Journal of Financial Economics, 93(1), 88-<br />

107.<br />

Cleary, S., Povel, P., & Raith, M. (2007). The U-shaped investment curve: Theory and evidence. Journal of<br />

Financial and Quantitative Analysis, 42(1), 1-39.<br />

Denis, D. J., & Osobov, I. (2008). Why do firms pay dividends? International evidence on the determinants of<br />

dividend policy. Journal of Financial Economics, 89(1), 62-82.<br />

Dhrymes, P. J., & Kurz, M. (1967). Investment, dividend, and external finance behaviour of firms. In R. Ferber<br />

(Ed.), Determinants of Investment Behaviour (pp. 427-467). New York: Columbia University Press.<br />

Ding, X., & Murinde, V. (2010). Simultaneous financial decision-making: Evidence from UK Firms. Strategic<br />

Change, 19(1-2), 45-56.<br />

Dixit, A. K., & Pindyck, R. S. (1994). Investment under Uncertainty. Princeton, N.J.: Princeton University Press.<br />

Erickson, T., & Whited, T. M. (2000). Measurement error and the relationship between investment and Q. Journal<br />

of Political Economy, 108(5), 1027-1057.<br />

Fama, E. F., & French, K. R. (2001). Disappearing dividends: Changing firm characteristics or lower propensity to<br />

pay? Journal of Financial Economics, 60(1), 3-43.<br />

Fazzari, S. M., Hubbard, R. G., & Petersen, B. C. (1988). Financing constraints and corporate investment. Brookings<br />

Papers on Economic Activity, 1988(1), 141-206.<br />

Ferris, S. P., Jayaraman, N., & Sabherwal, S. (2009). Catering effects in corporate dividend policy: The international<br />

evidence. Journal of Banking and Finance, 33(9), 1730-1738.<br />

Flannery, M. J., & Rangan, K. P. (2006). Partial adjustment toward target capital structures. Journal of Financial<br />

Economics, 79(3), 469-506.<br />

Frank, M. Z., & Goyal, V. K. (2003). Testing the pecking order theory of capital structure. Journal of Financial<br />

Economics, 67(2), 217-248.<br />

Frank, M. Z., & Goyal, V. K. (2009). Capital structure decisions: Which factors are reliably important? Financial<br />

Management, 38(1), 1-37.<br />

Ghosal, V., & Loungani, P. (2000). The differential impact of uncertainty on investment in small and large<br />

businesses. Review of Economic and Statistics, 82(2), 338-349.<br />

Guariglia, A. (2008). Internal financial constraints, external financial constraints, and investment choices: Evidence<br />

from a panel of UK firms. Journal of Banking and Finance, 32(9), 1795-1809.<br />

Gugler, K. (2003). Corporate governance, dividend payout policy, and the interrelation between dividends, R&D,<br />

and capital investment. Journal of Banking and Finance, 27(7), 1297-1321.<br />

Hartman, R. (1972). The effects of price and cost uncertainty on investment. Journal of Economic Theory, 5(2), 258-<br />

266.<br />

Hennessy, C. A., Levy, A., & Whited, T. M. (2007). Testing Q theory with financing frictions. Journal of Financial<br />

Economics, 83(3), 691-717.<br />

Khan, T. (2006). Company dividends and ownership structure: Evidence from UK panel data. Economic Journal,<br />

116(510), 172-189.<br />

Lee, Y. T., Liu, Y. J., Roll, R., & Subrahmanyam, A. (2006). Taxes and dividend clientele: Evidence from trading<br />

and ownership structure. Journal of Banking and Finance, 30(1), 229-246.<br />

Lensink, R., & Murinde, V. (2006). The inverted-U hypothesis for the effect of uncertainty on investment: Evidence<br />

from UK firms. European Journal of Finance, 12(2), 95-105.<br />



Lensink, R., & Sterken, E. (2000). Capital market imperfections, uncertainty and corporate investment in the Czech<br />

Republic. Economics of Planning, 33(1-2), 63-70.<br />

Mahajan, A., & Tartaroglu, S. (2008). Equity market timing and capital structure: International evidence. Journal of<br />

Banking and Finance, 32(5), 754-766.<br />

Marsh, P. (1982). The choice between debt and equity: an empirical study. Journal of Finance, 37(1), 121-144.<br />

Mason, R., & Weeds, H. (2010). Investment, uncertainty and pre-emption. International Journal of Industrial<br />

Organization, 28(3), 278-287.<br />

McCabe, G. M. (1979). The empirical relationship between investment and financing: A new look. Journal of<br />

Financial and Quantitative Analysis, 14(1), 119-135.<br />

McDonald, J. G., Jacquillat, B., & Nussenbaum, M. (1975). Dividend, investment and financing decisions:<br />

Empirical evidence on French firms. Journal of Financial and Quantitative Analysis, 10(5), 741-755.<br />

Miller, M. H., & Rock, K. (1985). Dividend policy under asymmetric information. Journal of Finance, 40(4), 1031-<br />

1051.<br />

Mougoué, M., & Mukherjee, T. K. (1994). An investigation into the causality among firms’ dividend, investment,<br />

and financing decisions. Journal of Financial Research, 17(4), 517-530.<br />

Mueller, D. C. (1967). The firm decision process: An econometric investigation. Quarterly Journal of Economics,<br />

81(1), 58-87.<br />

Myers, S. C., & Majluf, N. S. (1984). Corporate financing and investment decisions when firms have information<br />

that investors do not have. Journal of Financial Economics, 13(2), 187-221.<br />

Noronha, G. M., Shome, D. K., & Morgan, G. E. (1996). The monitoring rationale for dividends and the interaction<br />

of capital structure and dividend decisions. Journal of Banking and Finance, 20(3), 439-454.<br />

Peterson, P. P., & Benesh, G. A. (1983). A reexamination of the empirical relationship between investment and<br />

financing decisions. Journal of Financial and Quantitative Analysis, 18(4), 439-453.<br />

Ravid, S. A. (1988). On interactions of production and financial decisions. Financial Management, 17(3), 87-99.<br />

Sant, R., & Cowan, A. R. (1994). Do dividends signal earnings? The case of omitted dividends, Journal of Banking<br />

and Finance, 18(6), 1113-1133.<br />

Shyam-Sunder, L., & Myers, S. C. (1999). Testing static tradeoff against pecking order models of capital structure.<br />

Journal of Financial Economics, 51(2), 219-244.<br />

Skinner, D. J., & Soltes, E. (2011). What do dividends tell us about earnings quality? Review of Accounting Studies,<br />

16(1), 1-18.<br />

Wang, D. H. M. (2010). Corporate investment, financing, and dividend policies in the high-tech industry. Journal of<br />

Business Research, 63(5), 486-489.<br />



LEVERAGE ADJUSTMENT AND COST OF CAPITAL<br />

Sven Husmann, Michael Soucek & Antonina Waszczuk, European University Viadrina, Frankfurt (Oder), Germany<br />

Email: euv36052@europa-uni.de, www.europa-uni.de<br />

Abstract: The classical approach to obtaining the unlevered cost of capital using the CAPM is to adjust beta coefficients with one of the<br />
formulas proposed by, e.g., Modigliani & Miller (1958, 1963) or Miles & Ezzell (1985) in order to remove the financial effects of<br />
leverage. As a result we obtain the implicit unlevered beta. The static character of such an adjustment demands an assumption about<br />
the leverage ratio at a given point in time, which is not in line with its time-varying character. This study investigates the link between<br />
capital structure theory and asset pricing models for the American stock market. The unlevered beta is a measure of the systematic risk<br />
of a purely self-financed company. To capture the leverage dynamics it is reasonable to estimate the risk parameters from returns that<br />
have been unlevered in advance. However, in a majority of cases the implicit unlevered risk parameters differ from the estimates based<br />
on adjusted returns. This difference is statistically significant in 70% of cases for the CAPM and in almost all cases for the Fama &<br />
French (1993) risk loadings. The error is smaller for the Miles & Ezzell adjustment formula. We show that the traditional approach to<br />
unlevering risk parameters does not provide correct results and that the unlevered parameters have to be estimated from returns adjusted<br />
in advance. The market value of debt is estimated using the Merton (1974) model.<br />

Keywords: CAPM, Fama-French Model, Modigliani-Miller Adjustment, Miles-Ezzell Adjustment, Unlevering, Asset Beta<br />

JEL classification: G11, G12, G32<br />

1 Introduction<br />

The relevance of capital structure and the closely related estimation of firm-specific cost of capital is one of the most<br />
important questions in modern finance. Based on the results of Modigliani & Miller (1958, 1963; henceforth MM), a relation<br />
between the values of a levered and a fictive unlevered enterprise was constructed which takes the financing policy and tax<br />
shields into account. The well-known studies of Hamada (1969, 1972), Bowman (1980) and Conine (1980) create a link<br />
between the MM propositions and the Capital Asset Pricing Model of Sharpe (1964), Lintner (1965) and Mossin (1966),<br />
and derive adjustment formulas for firm-individual beta risk factors. The resulting asset beta is interpreted as the measure<br />
of the systematic risk of a fictive unlevered firm, which captures exclusively its business risk. Miles & Ezzell (1980, 1985;<br />
henceforth ME) extend the concept of MM and show an analogous relation for the cost of capital under the assumption of a<br />
value-oriented financing policy, i.e. under the assumption of a deterministic debt-to-equity ratio. Both adjustment formulas<br />
are classical textbook methods to separate financial and business risks.<br />
Numerous authors investigate the empirical validity of the MM theory. At the same time, the question of the actual consistency<br />
of the adjustment formulas with classical asset pricing models has been left unexplored. This aspect is discussed to some<br />
extent only in relation to the CAPM (Hamada (1972), Marston & Perry (1996)), while the context of multifactor pricing<br />
models has not been investigated so far. A short theoretical introduction to the Fama & French (henceforth FF) three factor<br />
model was delivered by Lally (2004) and Dempsey (2009), but to our knowledge there has been no empirical investigation<br />
dealing with this question.<br />
The estimated parameters of commonly used asset pricing models reflect the information about a dynamic change of leverage.<br />
However, the use of the classical adjustment formulas leaves the non-static character of the capital structure neglected,<br />
since the leverage-mimicking variable has to be approximated at a given point in time. The main aim of our study is to<br />
quantify the error which emerges when the adjustment formula is applied directly to the risk parameter. We show that the<br />
resulting loss of information is significant. If both analyzed asset pricing models are valid, then the commonly used method<br />
does not deliver a reliable reflection of the operative risk of the examined firm. We argue that the direct estimation of<br />
unlevered parameters is a much sounder method of unlevering.<br />
In comparison to earlier studies, we extend our analysis to the two most prominent capital structure models, those of MM<br />
and ME. Additionally, we check the relevance of the choice of a possibly realistic leverage variable. The use of book values<br />
for the estimation of the leverage ratio delivers little information concerning the dynamics of the capital structure and seems<br />
to be the least appropriate approximation for our study. Furthermore, the estimation of the leverage ratio in yearly intervals<br />
causes abrupt changes of capital structure which may be incompatible with the theory. For that reason, the main part of our<br />
paper uses the leverage in market values, where the value of debt is estimated using the Merton (1974) model instead of the<br />
commonly used approximation with its book value.<br />



The remainder of the paper is structured as follows: Section 2 describes the dataset and the methodology, Section 3 discusses<br />
the results, and Section 4 concludes the paper.<br />

2 Data<br />

Our sample consists of stocks listed on the NYSE and NASDAQ between 1989 and 2008. We limit our analysis to non-financial<br />
stocks 1 for which price, market capitalization (given as price times shares outstanding), pretax income, income tax, interest<br />
expense on debt and total liabilities are available in the Thomson DataStream database for the whole study period. To control<br />
for extreme observations we exclude stocks with a leverage ratio higher than 10; such companies are either threatened with<br />
insolvency or already insolvent. The final dataset consists of 418 stocks 2 .<br />
Our study requires the estimation of the following further variables: the tax rate is proxied by the quotient of income tax and<br />
pretax income at time t 3 . The firm-specific cost of debt is calculated as the ratio of interest expenses to total liabilities (book<br />
value) or to the market value of debt from the Merton model (market value). Contrary to the common procedure of calculating<br />
the cost of debt from credit default spreads and ratings, we rely on firms' accounting data. Because of the close relation<br />
between the adjustment formulas and the tax shield, we are interested in the exact actual interest and tax expenses. The<br />
leverage ratio in book (market) values is given as the quotient of total liabilities and the<br />
book (market) value of equity.<br />
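As a minimal sketch, the variable construction described above can be expressed as follows. All field values below are hypothetical illustration figures, not DataStream items:

```python
# Construct the firm-level variables described above from raw accounting items.
# All figures are hypothetical illustration values.

TOP_TAX_RATE = 0.39  # cap applied in the paper (see footnote 3)

def tax_rate(income_tax: float, pretax_income: float) -> float:
    """Tax rate proxied by income tax / pretax income, capped at 39%."""
    return min(income_tax / pretax_income, TOP_TAX_RATE)

def cost_of_debt(interest_expense: float, debt_value: float) -> float:
    """Firm-specific cost of debt: interest expense over (book or market) debt."""
    return interest_expense / debt_value

def leverage_ratio(total_liabilities: float, equity_value: float) -> float:
    """Leverage as total liabilities over (book or market) value of equity."""
    return total_liabilities / equity_value

# Hypothetical firm-year: pretax income 100, income tax 45 (rate gets capped),
# interest expense 6 on total liabilities of 120, equity value 80.
s = tax_rate(45.0, 100.0)        # capped at 0.39
r_F = cost_of_debt(6.0, 120.0)   # 0.05
L = leverage_ratio(120.0, 80.0)  # 1.5
```

The same three ratios feed directly into the adjustment formulas of the methodology section, which is why the exact accounting figures matter here.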

2.1 Merton Model<br />

Most studies on this topic approximate the leverage ratio using the market value of equity and the book value of certain<br />
balance sheet debt accounts (Sweeney, Warga and Winters 1997). Nevertheless, according to the theory of capital structure,<br />
the leverage in market values should be considered. In our paper, we estimate the market value of debt using the Merton<br />
model (Merton 1974). This model treats equity as a European call option on the company's assets. It follows:<br />

E_0 = V_0 N(d_1) - D e^{-rT} N(d_2)   (1)<br />

where<br />

d_1 = \frac{\ln(V_0 / D) + (r + \sigma_V^2 / 2) T}{\sigma_V \sqrt{T}},   d_2 = d_1 - \sigma_V \sqrt{T},<br />

and<br />

\sigma_E E_0 = N(d_1) \sigma_V V_0.   (2)<br />

E_0 and V_0 are the market value of equity and the total value of a firm respectively, \sigma_E and \sigma_V stand for the<br />
volatilities of the market values, r is the risk-free rate, T is the maturity of the hypothetical option and D is the book value of<br />
debt. The parameters V_0 and \sigma_V are unknown and need to be estimated. We solve the system of equations iteratively<br />
using the Newton-Raphson approach. Based on Bharath & Shumway (2008) we use the sum of the equity market value and<br />
total liabilities as a starting value for V_0. The volatility \sigma_E is estimated using the last 12 monthly returns, and the<br />
starting value for \sigma_V is \sigma_V = \sigma_E E_0 / (E_0 + D). The market value of debt is the difference between<br />
V_0 and the market value of equity.<br />
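The iterative solution of equations (1) and (2) can be sketched as follows. The paper solves the system with Newton-Raphson; this illustration uses a simpler fixed-point iteration on the two equations instead, and every input figure (equity value, equity volatility, book debt, risk-free rate, horizon) is hypothetical:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_debt_value(E0, sigma_E, D, r, T, tol=1e-10, max_iter=500):
    """Solve equations (1)-(2) for V0 and sigma_V by fixed-point iteration
    (a sketch; the paper uses Newton-Raphson) and return
    (V0, sigma_V, market value of debt = V0 - E0)."""
    V0 = E0 + D                        # starting value as in Bharath & Shumway (2008)
    sigma_V = sigma_E * E0 / (E0 + D)  # starting asset volatility
    for _ in range(max_iter):
        d1 = (log(V0 / D) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * sqrt(T))
        d2 = d1 - sigma_V * sqrt(T)
        V0_new = (E0 + D * exp(-r * T) * norm_cdf(d2)) / norm_cdf(d1)  # rearranged (1)
        sigma_V_new = sigma_E * E0 / (norm_cdf(d1) * V0_new)           # rearranged (2)
        converged = abs(V0_new - V0) < tol and abs(sigma_V_new - sigma_V) < tol
        V0, sigma_V = V0_new, sigma_V_new
        if converged:
            break
    return V0, sigma_V, V0 - E0

# Hypothetical firm: equity market value 60, equity volatility 40%,
# book debt 50, risk-free rate 5%, one-year horizon.
V0, sigma_V, debt_mv = merton_debt_value(60.0, 0.40, 50.0, 0.05, 1.0)
```

The fixed point satisfies both (1) and (2) simultaneously, so the recovered market value of debt is consistent with the observed equity value and volatility.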

1 Exclusion of financial stocks is a common procedure due to the different accounting standards relevant for such firms and their characteristically high leverage.<br />
2 The survivorship bias problem is not relevant for our study.<br />
3 If the calculated tax rate is higher than 39%, it is replaced with this rate, which is the highest tax rate in the U.S. since 1988 for incomes between $100,000 and $335,000.<br />



2.2 Methodology<br />

We evaluate two asset pricing models, the Capital Asset Pricing Model of Sharpe, Lintner and Mossin and the Fama &<br />
French (1993) three factor model. To estimate the risk parameters we use simple OLS regressions:<br />

r^L_{it} - r_f = \alpha_i + \beta^L_{it} (E[r_m] - r_f) + \epsilon_{it}   (3)<br />

r^L_{it} - r_f = \alpha_i + b^L_{it} (E[r_m] - r_f) + s_{it} SMB_t + h_{it} HML_t + \epsilon_{it}   (4)<br />

The levered parameters for each stock i in our sample are estimated with rolling regressions using five years of past monthly<br />
levered returns r^L_{it}. 4 Here r_m stands for the monthly market return, approximated with the value-weighted average of<br />
all stocks used in our study, and r_f for the monthly risk-free return. 5 The proxy portfolios SMB_t and HML_t are constructed<br />
by replicating the Fama & French (1993) methodology.<br />

4 Because of our restriction, each rolling regression uses exactly 60 observations when estimating the risk parameters.<br />
5 The risk-free rate of return is taken from the Ken French online data library.<br />

The risk parameters of fictive unlevered firms are calculated with the help of adjustment formulas in the spirit of the MM and<br />
ME frameworks. In the case of a non-zero covariance between the cost of debt and the proxy portfolios, the formula of Conine<br />
(1980) should be applied in relation to the MM propositions:<br />

\beta^L_{it} = \beta^{U,impl}_{it} + (\beta^{U,impl}_{it} - \beta^F_{it}) (1 - s_{it}) L_{it}   (5)<br />

In the adjustment in the spirit of Hamada (1972) and Bowman (1979, 1980), \beta^F_{it} is set to 0. To unlever according to<br />
ME, the following equation is to be applied:<br />

\beta^L_{it} = \beta^{U,impl}_{it} + (\beta^{U,impl}_{it} - \beta^F_{it}) \frac{1 + r^F_{it} (1 - s_{it})}{1 + r^F_{it}} L_{it}   (6)<br />

Here s_{it} stands for the tax rate, r^F_{it} for the cost of debt and L_{it} for the leverage ratio of an individual stock at time t.<br />
Lally (2004) shows that this adjustment is analogously applicable to the risk coefficients in the FF three factor model. We use<br />
the simple mean of the firm-specific leverage ratio and tax rate over the estimation period in the formulas above. As a result<br />
we obtain \beta^{U,impl}_{it}, or b^{U,impl}_{it}, s^{U,impl}_{it} and h^{U,impl}_{it}. We call these estimates the implicit<br />
risk parameters.<br />
In most of the studies which work with the assumption of a possible covariance between the risk parameters and the cost of<br />
debt, the latter is obtained with the help of credit default spreads. In contrast, we rely on the relevant balance sheet data. To<br />
obtain the cost-of-debt risk loadings \beta^F_{it} we regress the cost of debt on the given proxy portfolios.<br />
Alternatively, the risk parameters of an unlevered firm can be obtained directly from its unlevered returns. For the MM<br />
adjustment with firm-specific market or book value of debt, tax rate and cost of debt, the following holds:<br />

r^U_{it} = \frac{r^L_{it} + r^F_{it} (1 - s_{it}) L_{it}}{1 + (1 - s_{it}) L_{it}}   (7)<br />

while accordingly for Miles & Ezzell we have<br />


r^U_{it} = \frac{r^L_{it} + r^F_{it} \frac{1 + r^F_{it} (1 - s_{it})}{1 + r^F_{it}} L_{it}}{1 + \frac{1 + r^F_{it} (1 - s_{it})}{1 + r^F_{it}} L_{it}}   (8)<br />

The obtained unlevered returns are then regressed (in analogy to their levered counterparts) on the proxy portfolios of the<br />
CAPM and of the three factor model. The resulting regression coefficients are the directly estimated risk parameters of a<br />
fictive unlevered enterprise, \hat{\beta}^U_{it} or \hat{b}^U_{it}, \hat{s}^U_{it} and \hat{h}^U_{it}. We can show that the<br />
CAPM and the three factor model have very similar explanatory power for both levered and unlevered returns at both the<br />
sector and the industry level. The tables are omitted in order to save space. As long as the applied models explain the fictive<br />
unlevered returns as well as the returns observable on the market with similar power, the obtained unlevered risk parameters<br />
can be interpreted as the true values of the firms' operative risk. If the adjustment formulas (5) and (6) used in the<br />
determination of the risk parameters are correct, both sets of parameters should have similar values.<br />
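The contrast between the two unlevering routes can be illustrated with a small self-contained calculation: a synthetic, noise-free return series is unlevered period by period with equation (7), and the directly estimated unlevered beta is compared with the implicit one obtained by applying adjustment (5), with \beta^F set to 0 as in the Hamada case, to the mean leverage. All inputs are hypothetical illustration values, not estimates from the paper's dataset:

```python
# Toy comparison of the implicit and the directly estimated unlevered beta.
# All return, leverage and tax figures are synthetic illustration values.

def ols_slope(y, x):
    """Slope of an OLS regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

r_m = [0.01, -0.02, 0.03, 0.015, -0.01, 0.02]  # market excess returns
r_L = [0.002 + 1.2 * r for r in r_m]           # levered excess returns, true beta_L = 1.2
L_t = [0.8, 1.0, 1.4, 1.6, 1.1, 0.9]           # time-varying leverage ratio
r_F, s = 0.004, 0.30                           # monthly cost of debt and tax rate

# Equation (7): unlever each return with that period's leverage.
r_U = [(rl + r_F * (1 - s) * l) / (1 + (1 - s) * l) for rl, l in zip(r_L, L_t)]

beta_L = ols_slope(r_L, r_m)               # levered beta from the regression
beta_direct = ols_slope(r_U, r_m)          # directly estimated unlevered beta
L_bar = sum(L_t) / len(L_t)
beta_impl = beta_L / (1 + (1 - s) * L_bar)  # implicit beta via (5) with beta_F = 0
```

With a constant leverage path the two estimates coincide; the gap opens precisely because L_t varies over the estimation window, which is the information loss the paper quantifies.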

3 Results<br />

The main result of our study is obtained by comparing the implicit and the directly estimated risk parameters. We analyze<br />
both prominent unlevering approaches, based on Modigliani & Miller (1963) and Miles & Ezzell (1985), using various<br />
leverage estimators. Since we divided the data into industry sector portfolios, we assume that the operative risk within an<br />
industry sector or industry class is similar. If this is the case, then the standard deviation of the unlevered parameters must be<br />
significantly lower than the one obtained from the levered parameters (Hamada (1972)). This fact could be verified for almost<br />
100% of the cases using the F-test; the table has been omitted in order to save space. As described above, because the leverage<br />
has a time-varying character, considering point-in-time estimates of the leverage ratio causes an information loss when<br />
estimating the operational (business) risk. The aim of our study is to verify whether this error is significant. Practitioners and<br />
most researchers use the arithmetic mean to capture the leverage over the estimation period. To obtain the implicit unlevered<br />
risk loadings we also use this approximation in our study. The results do not change if we consider the values from the actual<br />
period of unlevering, which is also a common approach. Tables 1 and 3 summarize the results of the comparison of the<br />
implicit and directly estimated unlevered parameters for the CAPM and the Fama & French (1993) models. In the following,<br />
we assume that the parameters estimated directly from the unlevered returns are a valid measure of business risk. We observe<br />
that, under the assumption that the asset pricing models hold for unlevered returns, the adjustment of the levered risk<br />
parameters does not provide correct results. The influence of the variation in the leverage ratio and the tax shield cannot be<br />
captured with the implicit unlevered risk parameters. For the CAPM, the debt-equity ratio measured in market values and the<br />
MM adjustment, we find only in 25% (8 of 32) of the cases that the implicit and directly estimated unlevered parameters are<br />
not significantly different at the industry sector level. This is also the case for only two out of nine industry classes, which is<br />
less than 25%. In around 80% of the cases the difference is not equal to zero. Using the adjustment by Conine (1980) and<br />
estimating the cost of debt with the Merton model, the number of insignificant differences slightly increases, from 15.63% to<br />
25%. Compared to MM, the ME adjustment is very sensitive to the consideration of the cost of debt beta. Omitting the<br />
\beta^F increases the number of "correct" implicit estimates of the unlevered risk loadings. The reason is the important role<br />
the cost of debt plays in the ME adjustment. Even if the covariance between the cost of debt and the market risk premium is<br />
generally insignificant, it still has an impact on the accuracy of the estimates.<br />



Table 1: CAPM Results (the table body is not recoverable from the source; its columns follow the layout of Table 3: Leverage, Corr. \beta^F, and Modigliani/Miller and Miles/Ezzell results by industry sector and class.)<br />


The approximation of the market value of debt with the book value of total liabilities yields results similar to those obtained<br />
with the Merton model. Nevertheless, the advantage of using the Merton model is the possibility of calculating a monthly<br />
varying cost of debt.<br />

Leverage    Corr. \beta^F    Sec. (3)     Sec. (2)     Ind. (3)    Ind. (2)<br />
Modigliani/Miller<br />
Book        Yes              0 (0%)       2 (6.25%)    0 (0%)      1 (11.1%)<br />
            No               0 (0%)       2 (6.25%)    0 (0%)      1 (11.1%)<br />
            HML              0 (0%)       2 (6.25%)    0 (0%)      1 (0%)<br />
Market      Yes              0 (0%)       3 (9.38%)    0 (0%)      0 (0%)<br />
            No               0 (0%)       2 (6.25%)    0 (0%)      1 (11.1%)<br />
            HML              0 (0%)       3 (9.38%)    0 (0%)      0 (0%)<br />
TL to MV    Yes              1 (3.13%)    2 (6.25%)    0 (0%)      0 (0%)<br />
            No               1 (3.13%)    2 (6.25%)    0 (0%)      0 (0%)<br />
            HML              1 (3.13%)    2 (6.25%)    0 (0%)      0 (0%)<br />
Miles/Ezzell<br />
Book        Yes              0 (0%)       1 (3.13%)    0 (0%)      1 (11.1%)<br />
            No               0 (0%)       2 (6.25%)    0 (0%)      0 (0%)<br />
            HML              0 (0%)       1 (3.13%)    0 (0%)      0 (0%)<br />
Market      Yes              0 (0%)       2 (6.25%)    0 (0%)      1 (11.1%)<br />
            No               0 (0%)       3 (9.38%)    0 (0%)      0 (0%)<br />
            HML              0 (0%)       2 (6.25%)    0 (0%)      0 (0%)<br />
TL to MV    Yes              1 (3.13%)    3 (9.38%)    0 (0%)      1 (11.0%)<br />
            No               0 (0%)       3 (9.38%)    0 (0%)      0 (0%)<br />
            HML              1 (3.13%)    2 (6.25%)    0 (0%)      0 (0%)<br />
Table 3: Fama/French Three Factor Model Results<br />
The table shows the number and share of industry sectors and industry classes for which all three, (3), or at least two, (2), of<br />
the directly estimated and implicit risk loadings are not significantly different. We consider 32 industry sectors from 9 industry<br />
classes. The column Corr. \beta^F indicates whether \beta^F is considered when calculating the implicit unlevered risk<br />
loadings; in the HML rows only the covariance between the cost of debt and the HML proxy portfolio is considered.<br />

4 Conclusions<br />

To separate the financial and operational risk of a firm, researchers and practitioners use adjustment formulas. In connection<br />
with the common asset pricing models, one may use these formulas to obtain the risk parameters of a hypothetically unlevered<br />
firm. The two most important adjustment formulas are based on the work of Modigliani & Miller (1958, 1963) and Miles &<br />
Ezzell (1985). In the first step, the market parameters and risk loadings for a levered firm are estimated. Next, these levered<br />
risk loadings are unlevered using the adjustment formulas mentioned above. Nevertheless, the static assumption about the<br />
firm's leverage causes an information loss. Since the company's leverage has a time-varying character, the approximation of<br />
the leverage with a constant value leads to incorrect results. We conclude that the direct unlevering of the stock returns and<br />
the direct estimation of the unlevered parameters from the unlevered returns is the correct approach to obtain the operational<br />
risk. We also advise modelling the monthly variation in the market value of debt, since the estimation of the leverage ratio in<br />
yearly intervals, e.g. using book values, causes abrupt changes of capital structure which may be incompatible with the theory.<br />
Even if both adjustment formulas contain the error described above, we show that it is smaller when using the Miles & Ezzell<br />
adjustment. Additionally, we show that the results react only slightly to the consideration of the covariance between the cost<br />
of debt and the market portfolios.<br />



Adjustment          Leverage    Corr.    MRP      SMB      HML<br />
Modigliani/Miller   Market      Yes      0.0442   0.0495   0.0737<br />
                    Market      No       0.0442   0.0495   0.0739<br />
                    TL to MV    Yes      0.0548   0.0575   0.0839<br />
                    TL to MV    No       0.0539   0.0566   0.0825<br />
                    Book        Yes      0.0540   0.0570   0.0827<br />
                    Book        No       0.0540   0.0571   0.0828<br />
Miles/Ezzell        Market      Yes      0.0003   0.0003   0.0005<br />
                    Market      No       0.0015   0.0022   0.0033<br />
                    TL to MV    Yes      0.0156   0.0221   0.0340<br />
                    TL to MV    No       0.0016   0.0022   0.0035<br />
                    Book        Yes      0.0004   0.0003   0.0005<br />
                    Book        No       0.0015   0.0022   0.4869<br />

Table 4: Fama/French Three Factor Model Average Root Mean Squared Error<br />

The table shows the average root mean squared error between the directly estimated and the implicit risk loadings for the FF three-factor model. We calculate the<br />
error for all stocks between January 1995 and December 2008. The risk parameters are estimated using five-year rolling regressions. The column<br />
Leverage indicates how the company’s leverage is approximated.<br />
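The estimation step described in the note above can be sketched as a rolling five-year (60-month) OLS regression of excess returns on the three factors. The factor and return series below are simulated stand-ins, not the study's data, and the variable names are illustrative.

```python
# Rolling 60-month OLS estimation of three-factor risk loadings on simulated data.
import numpy as np

rng = np.random.default_rng(0)
T = 168                                     # months, e.g. Jan 1995 - Dec 2008
factors = rng.normal(size=(T, 3))           # stand-ins for MRP, SMB, HML
true_b = np.array([1.0, 0.4, -0.2])         # illustrative true loadings
returns = factors @ true_b + rng.normal(scale=0.05, size=T)

window = 60                                 # five years of monthly observations
loadings = []
for end in range(window, T + 1):
    # OLS with an intercept over the trailing 60-month window
    X = np.column_stack([np.ones(window), factors[end - window:end]])
    y = returns[end - window:end]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    loadings.append(beta[1:])               # keep the factor loadings only
loadings = np.array(loadings)
print(loadings.shape)                       # (109, 3)
```

Each row of `loadings` is one window's estimate; comparing such directly estimated loadings with the implicit (adjustment-formula) loadings yields the errors reported in Table 4.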

5 References<br />

Bharath, S., & Shumway, T. (2008). Forecasting Default with the Merton Distance to Default Model. Review of<br />
Financial Studies, 21(3), 1339-1369.<br />
Bowman, R. G. (1979). The Theoretical Relationship Between Systematic Risk and Financial (Accounting)<br />
Variables. The Journal of Finance, 34(3), 617-630.<br />
Bowman, R. G. (1980). The Importance of a Market Value Measurement of Debt in Assessing Leverage. Journal of<br />
Accounting Research, 18, 242-254.<br />

Conine, T. (1980). Corporate Debt and Corporate Taxes: An Extension. The Journal of Finance, 35(4), 1033-1037.<br />

Dempsey, M. (2009). The Fama-French three factor model and leverage: compatibility with the Modigliani and<br />

Miller propositions. Investment Management and Financial Innovations, 6(1), 48-53.<br />

Fama, E. F., & French, K. R. (1992). The Cross-Section of Expected Stock Returns. The Journal of Finance, 47(2),<br />

427-465.<br />

Fama, E. F., & French, K. R. (1993). Common Risk Factors in the returns on stocks and bonds. The Journal of<br />

Financial Economics, 33(1), 3-56.<br />

Fama, E. F., & French, K. R. (1997). Industry costs of equity. The Journal of Financial Economics, 43(1), 153-193.<br />

Hamada, R. (1969). Portfolio Analysis, Market Equilibrium and Corporation Finance. The Journal of Finance,<br />

24(1), 13-31.<br />

Hamada, R. (1972). The Effect of the Firm's Capital Structure on the Systematic Risk of Common Stock. The<br />

Journal of Finance, 27(2), 435-452.<br />

Lally, M. (2004). The Fama-French Model, Leverage, and the Modigliani-Miller Propositions. The Journal of<br />

Financial Research, 27(3), 341-349.<br />



Lintner, J. (1965). The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and<br />

Capital Budgets. Review of Economics and Statistics, 47(1), 13-37.<br />

Marston, F., & Perry, S. (1996). Implied Penalties for Financial Leverage: Theory Versus Empirical Evidence.<br />

Quarterly Journal of Business and Economics, 35, 77-97.<br />

Merton, R.C. (1974). On the Pricing of Corporate Debt: the Risk Structure of Interest Rates. The Journal of Finance,<br />

29(2), 449-470.<br />

Miles, J.A., & Ezzell, J. R. (1980). The Weighted Average Cost of Capital, Perfect Capital Markets, and Project<br />

Life: A Clarification. The Journal of Financial and Quantitative Analysis, 15(3), 719-730.<br />

Miles, J.A., & Ezzell, J. R. (1985). Reformulating Tax Shield Valuation: A Note. The Journal of Finance, 40(5),<br />

1485-1492.<br />

Modigliani, F., & Miller, M. (1958). The Cost of Capital, Corporation Finance, and the Theory of Investment.<br />

American Economic Review, 48(3), 261-297.<br />

Modigliani, F., & Miller, M. (1963). Corporate Income Taxes and the Cost of Capital: A Correction. American<br />

Economic Review, 53(3), 433-443.<br />

Mossin, J. (1966). Equilibrium in a Capital Asset Market. Econometrica, 34(4), 768-783.<br />

Schaefer, S. M., & Strebulaev, I. A. (2008). Structural models of credit risk are useful: Evidence from hedge ratios<br />

on corporate bonds. The Journal of Financial Economics, 90(1), 1-19.<br />

Sharpe, W. F. (1964). Capital asset prices: a theory of market equilibrium under conditions of risk. The Journal of<br />

Finance, 19(3), 425-442.<br />

Sweeney, R. J., Warga, A. D., & Winters, D. (1997). The Market Value of Debt, Market Versus Book Value of<br />

Debt, and Returns to Assets. Financial Management, 26(1), 5-21.<br />



DOES LEVERAGE AFFECT LABOUR PRODUCTIVITY? A COMPARATIVE STUDY OF LOCAL AND<br />

MULTINATIONAL COMPANIES OF THE BALTIC COUNTRIES<br />

Mari Avarmaa 1<br />

Tallinn University of Technology<br />

Akadeemia tee 3, Tallinn 12618<br />

Phone: +372 6204 057<br />

E-mail: mari_avarmaa@hotmail.com<br />

Aaro Hazak<br />

Tallinn University of Technology<br />

Akadeemia tee 3, Tallinn 12618<br />

Phone: +372 6204 057<br />

E-mail: aaro.hazak@tseba.ttu.ee<br />

Kadri Männasoo<br />

Tallinn University of Technology<br />

Akadeemia tee 3, Tallinn 12618<br />

Phone: +372 6204 057<br />

E-mail: kadri.mannasoo@tseba.ttu.ee<br />

Abstract. This paper investigates the impact of leverage on labour productivity of companies operating in the Baltic countries, with a<br />

focus on differences between local and multinational companies. We employ a fixed effects regression model on company level data,<br />

covering the period from 2001 to 2008. Our results demonstrate that the impact of leverage on labour productivity is non-linear and it<br />

differs dramatically between local and multinational companies. In the case of local companies, at low levels of leverage, an increase<br />

in external financing tends to bring along an improvement in labour productivity, while at higher levels of leverage, an increase in<br />

debt financing appears to result in a loss of labour productivity. For multinational companies, leverage does not seem to have a<br />

significant impact on labour productivity. Although debt overhang is believed to be an issue in Baltic countries in general, local<br />

companies with low leverage might be able to increase labour productivity through additional borrowing.<br />

Keywords: labour productivity, leverage, multinational companies, local companies, Baltic countries<br />

JEL classification: G32, D24<br />

1 Introduction<br />

The Baltic countries have been characterised by fast economic growth after regaining independence, demonstrating<br />
average GDP growth rates of 5% during 1996-2009. 2 The increase in private sector credit has been even more<br />
striking. Between 2000 and 2007, the share of private sector credit in GDP doubled in Lithuania and tripled in<br />
Estonia and Latvia. 3 After the period of rapid growth, followed by the vast economic downturn of 2008 and 2009, the<br />
question faced by these countries is how to sustain development.<br />

The neoclassical growth theory, drawing on the seminal work of Solow (1956), demonstrates that productivity<br />

growth is one of the main drivers for long-term per capita GDP growth. This relationship has found strong empirical<br />

support (e.g. Hall and Jones 1999, OECD 2003) including in the context of Baltic countries (Schadler et al. 2006;<br />

Arratibel et al. 2007). Understanding the determinants of productivity on a micro level, as well as the related<br />

challenges and opportunities in a broader context, is therefore one of the key elements for exploring the paths for<br />

economic growth.<br />

While access to credit has been largely seen as a prerequisite for economic success (King and Levine 1993), the<br />

recent lending booms have rather demonstrated the risks to company viability resulting from excessive debt<br />

financing, highlighted by the global crisis of 2008/09. The impact of leverage on productivity and long-term growth<br />

hence deserves closer scrutiny.<br />

In the decades to come, the Baltic economies will be increasingly under pressure as a result of the ageing<br />

population. As a consequence, output growth needs to be achieved with limited increase of labour force, and<br />

improvements in labour productivity are essential for sustaining growth.<br />

The Baltic countries have been successful in attracting foreign investments. Empirical evidence shows that<br />

foreign direct investments play an important role in the labour productivity growth in the region (Bijsterbosch and<br />

Kolasa 2009). It would therefore be interesting to understand the drivers of labour productivity of multinational<br />

companies operating in the Baltic countries, and to identify whether these are significantly different from the<br />

determinants of labour productivity of local companies.<br />

1 Corresponding author<br />

2 Eurostat<br />

3 European Commission 2010<br />



This paper focuses on company financing and ownership as determinants of labour productivity. Our aim is to<br />

study the relationship between leverage and labour productivity comparing the MNCs and local companies in the<br />

Baltic countries. Although the areas of capital structure and productivity have both been widely researched, the<br />

linkage between company financing, ownership structure and labour productivity has received limited attention in<br />

previous literature. Empirical evidence indicates that MNCs in the Baltic countries have more flexibility in their<br />

financing decisions compared to local firms as the negative impact of credit constraints on leverage is much stronger<br />

in the case of local companies (Avarmaa et al. forthcoming). We seek to investigate whether such flexibility leads to<br />

any advantages for MNCs in achieving higher labour productivity. We perform a panel data regression analysis on a<br />

sample of 3,676 Baltic companies covering the period from 2001 to 2008. To our knowledge, this is the first<br />

empirical research on the relationships between leverage and productivity covering the three Baltic countries. We<br />

contribute to the literature by investigating whether the impact of leverage on labour productivity is different for<br />

local companies and MNCs.<br />

The article is set up as follows. The next section provides an overview of the literature on the relationships<br />
between leverage and productivity, as well as on the productivity differences between foreign and local firms.<br />
Section 3 presents the regression model and data, Section 4 explains our results, and the last section concludes the<br />

paper.<br />

2 Literature overview<br />

The classics of corporate finance theories offer some predictions on the influence of leverage on productivity. The<br />

agency theory of capital structure explains that debt functions as a monitoring device over managers (Jensen and<br />

Meckling, 1976), meaning that higher debt levels might thus result in higher efficiency and productivity. The<br />

signalling theory of capital structure suggests that since better performing companies use the issuance of debt as a<br />

signal about their quality (Ross, 1977), higher debt might be associated with higher productivity. On the other hand,<br />

the debt overhang concept by Myers (1977) explicates that high leverage can cause firms to underinvest, since the<br />

benefits of new capital investments accrue largely to debt holders instead of equity holders. This leads ultimately to<br />

weaker firm performance. The law of diminishing returns shows that every additional unit of (labour) input results<br />

in a diminishing increase in output. Coricelli et al. (2010) offer a similar explanation, pointing out that excessive<br />

leverage could lead to overcapacity and therefore result in lower productivity.<br />

As regards empirical research, several works show that leverage has a negative impact on productivity. Nucci et<br />

al. (2005) find a negative relationship between leverage and productivity in a sample of Italian companies. They<br />

show that there is a negative causal relationship from firm’s leverage to its propensity to innovate, and that<br />

innovativeness leads to higher productivity. Ghosh (2009) reaches a similar conclusion on a sample of Indian high-tech<br />
firms. Based on their quantile regression analysis of a sample of Portuguese companies, Nunes et al. (2007) also<br />

show that the relationship between leverage and labour productivity is negative, except for the most productive firms<br />

in the case of which higher leverage tends to increase productivity. In contrast to the above papers, Kale et al. (2007)<br />

find on a sample of US companies a positive concave relationship between leverage and labour productivity, in line<br />

with the agency theory. Hossain et al. (2005) analyse the components of productivity growth in US food<br />

manufacturing industry and find that increases in dividends contribute to the productivity growth, in line with the<br />

signalling theory.<br />

Out of the limited research on the relationships between leverage and productivity as well as company<br />

ownership in Central and Eastern Europe (CEE), Coricelli et al. (2010) have focussed on the impact of leverage on<br />

total factor productivity (TFP) growth in twelve CEE countries (including Latvia) and found the relationship to be<br />

non-linear. The impact of foreign ownership on TFP growth appeared to be insignificant, except for the subsample<br />

with non-zero debt where a positive effect was found. Gatti and Love (2006) find on a Bulgarian sample that access<br />

to credit is positively associated with productivity. Moreno Badia and Slootmaekers (2008) have investigated the<br />

relationship between productivity and financial constraints in Estonia. They conclude that financial constraints do<br />

not have an impact on productivity in most sectors, with the exception of R&D, where financial constraints have a<br />

large negative impact on productivity. They find that companies with majority foreign ownership are more<br />

productive.<br />



Within the broad area of productivity related research, productivity differences between foreign and domestic<br />

companies have received increasing attention during the last two decades. Pfaffermayr and Bellak (2000) have<br />

summarised the main reasons for performance differences between foreign and domestic firms that have emerged<br />

from existing research. They have pointed out the following factors: the firm-specific assets (such as production<br />

process, reputation or brand) of multinational companies transferred from and to affiliates; the more narrow<br />

specialisation of foreign-owned firms due to belonging to a larger group; the access of foreign-owned companies to<br />

new technologies and opportunities for learning; different accounting practices, and different corporate governance<br />

structures. The review paper by Bellak (2004) provides a detailed discussion on the sources of productivity gaps<br />

between foreign-owned and domestic companies.<br />

Empirical evidence on the productivity gap between foreign and domestic corporations is mixed, although the<br />

existence of such a gap tends to be supported. Girma et al. (1999) find that there is a productivity and wage gap<br />

between foreign and domestic firms in the manufacturing sector of the UK. Oulton (1998a) finds labour productivity<br />

of foreign manufacturing plants to be higher compared to the UK-owned plants as well as labour productivity of<br />

foreign companies to be better in the non-manufacturing sector in the UK (1998b). Greenaway et al. (2009) show<br />

that there is a U-shape relationship between foreign ownership and productivity in China, suggesting that foreign<br />

ownership is associated with improved performance only as long as it is accompanied by some degree of local<br />

participation. In their quantile regression analysis of foreign-owned and domestic corporations in Greece, Dimelis<br />

and Louri (2002) found that in the middle-productivity range, foreign firms exhibit higher efficiency while foreign<br />

ownership does not matter among the very productive and least productive firms. Nunes et al. (2007) show that<br />

foreign ownership increases labour productivity for all but the least productive firms in Portugal. In their plant-level<br />

comparative analysis of labour productivity in foreign and domestic establishments in Canada, Globerman et al.<br />

(1994), however, found no significant differences in productivity between these two groups after controlling for<br />

factors such as size and capital intensity.<br />

An area of research directly related to productivity differences is the study of productivity spillovers where the<br />

focus is on the indirect benefits of FDI to productivity in the host country. Generally, productivity spillovers are said<br />

to take place when the entry or presence of MNCs lead to productivity or efficiency benefits in the host country’s<br />

local firms, and the MNCs are not able to internalize the full value of these benefits (Blomström and Kokko 1998).<br />

In this area, several studies have been performed based on data from CEE countries. Vahter (2004) has studied the<br />

productivity spillovers in the manufacturing sector of Slovenia and Estonia and found that in both countries foreign<br />

firms exhibit higher labour productivity compared to domestic companies. Positive spillover effects were found only<br />

in Slovenia. Vahter and Masso (2006) have studied spillover effects in Estonia for 1995-2002 and found that<br />

foreign firms demonstrate higher total factor productivity than domestic firms, while the results regarding the existence of<br />

spillover effects were mixed. Gersl et al. (2007) have investigated productivity spillovers in CEE countries and<br />

found that the effects differ across countries and depend on various firm-, industry- and country-specific<br />

characteristics.<br />

Some authors have considered the impact of financing when analysing productivity gaps. Explaining the higher<br />

productivity of foreign firms operating in UK compared to the local companies, Oulton (1998b) has pointed out that<br />

local companies might face higher cost of capital than foreign-owned companies while foreign companies are likely<br />

to be less constrained by the UK financial markets. Analyzing productivity gaps in Greece, Dimelis and Louri<br />

(2002) have found a positive and significant effect of leverage as one of the control variables on labour productivity.<br />

Greenaway et al. (2009), on the other hand, have found no significant relationship between leverage and labour<br />

productivity on a Chinese sample of foreign and local companies.<br />

In our study, we seek to link the relationship between financing and productivity with productivity gaps<br />

between foreign (multinational) and local companies. Our main focus is on the impact of leverage on productivity of<br />

MNCs and local companies in the Baltic countries.<br />



3 The data and model<br />

3.1 The model<br />

We use panel data regression analysis to study the determinants of labour productivity. Drawing on the work of<br />

Dimelis and Louri (2002), we use an augmented version of Cobb-Douglas production function for our empirical<br />

model. Like these authors, we have included leverage as one of the independent variables. In order to allow for<br />
differences in the impact of leverage on productivity between multinational and local companies, we have included<br />
an interaction term between leverage and a dummy variable for multinational companies. The model is<br />

complemented with additional control variables derived from the findings of previous research. We model labour<br />

productivity of the i-th company at time t as follows:<br />

log(Y/L)_it = β1·LEV_it + β2·LEV²_it + β3·LEV_it×MNC_it + β4·LEV²_it×MNC_it + β5·AGE_it + β6·AGE²_it + β7·SIZE_it + β8·TANG_it + β9·HHI_it + β10·LEV_it×SKILL_it + α_i + u_it<br />

where α denotes firm-level fixed effects. The variables are explained in Table 1 below.<br />

Variable                  Abbreviation  Measurement                                                  Expected sign<br />
Labour Productivity       ln(Y/L)       Ln(Sales/number of employees)                                dependent variable<br />
Adjusted Leverage         LEV           (Short-term debt + Long-term liabilities)/(Total assets      non-linear<br />
                                        - Current liabilities + Short-term debt)<br />
Long-term Leverage        LEV           Long-term debt/(Total assets - Current liabilities           non-linear<br />
                                        + Short-term debt)<br />
Age                       AGE           Number of years from incorporation                           non-linear<br />
Size                      SIZE          Ln of total assets                                           +<br />
Tangibility               TANG          Fixed assets/Total assets                                    -<br />
Herfindahl index          HHI           Sum of squared market shares of all firms in the industry    +<br />
                                        based on 2-digit US SIC codes<br />
Skill-intensive industry  SKILL         1 if belonging to a skill-intensive industry, otherwise 0    +<br />
Multinationality          MNC           1 if more than 50% owned by a foreign company, otherwise 0   +<br />
Table 1. Variables used in the regression model<br />

We employ a fixed effects model since it helps to control for unobserved heterogeneity between the firms that is<br />

constant over time and correlated with independent variables. The Hausman test showed that a fixed effects model<br />

was to be preferred to a random effects model. Robust standard errors have been employed, which control for the<br />

bias in the presence of heteroskedasticity and for the within-cluster serial correlation.<br />
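The fixed effects (within) estimator described above can be sketched as follows: demean every variable by firm, then run OLS on the demeaned data, which sweeps out the time-constant firm effects α_i. The data and coefficient values below are simulated for illustration; the actual study uses firm-level Amadeus data and cluster-robust standard errors.

```python
# Within (fixed effects) estimator on a simulated firm-year panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_firms, n_years = 200, 8
firm = np.repeat(np.arange(n_firms), n_years)
alpha = rng.normal(size=n_firms)[firm]          # firm fixed effects
lev = rng.uniform(0.0, 1.0, size=firm.size)     # illustrative leverage
# Illustrative non-linear (inverse U-shaped) effect of leverage on log productivity
y = 0.8 * lev - 0.6 * lev**2 + alpha + rng.normal(scale=0.1, size=firm.size)

df = pd.DataFrame({"firm": firm, "lev": lev, "lev2": lev**2, "y": y})
# Within transformation: subtract the firm-level mean from every variable,
# which removes the fixed effect alpha_i exactly.
demeaned = df.groupby("firm").transform(lambda s: s - s.mean())
X = demeaned[["lev", "lev2"]].to_numpy()
beta, *_ = np.linalg.lstsq(X, demeaned["y"].to_numpy(), rcond=None)
print(beta.round(2))                            # close to [0.8, -0.6]
```

After demeaning, no intercept is needed; the recovered coefficients approximate the simulated leverage and squared-leverage effects.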

There are various ways of measuring productivity. Syverson (2010) has highlighted issues related to the<br />

measurement choice, concluding that the results of previous productivity research are generally not sensitive to the<br />

method of measuring productivity. The most common measure of productivity in company-level research appears to<br />

be TFP (e.g. Nucci et al. 2005, Ghosh 2009, Chen 2010, Coricelli et al. 2010). We however concentrate on studying<br />

labour productivity as one of the key factors for economic growth under the aging population. Several previous<br />

works on productivity have used value added per employee for measuring labour productivity (Globerman et al.<br />

1999, Oulton 1998a, 1998b, Doms and Jensen 1998, Girma et al. 1999). Due to data limitations, we have not been<br />

able to calculate value added for our data set. Therefore, logarithm of sales per employee was used as a measure of<br />

productivity (Y/L), similarly to Dimelis and Louri (2002) and Pfaffermayr and Bellak (2000). Since sales are<br />



influenced by inflation, real sales figures have been used. In order to arrive at real sales, industry-level price-index<br />

deflators obtained from Eurostat have been used.<br />
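The deflation step described above can be sketched as a one-line conversion from nominal to real sales, assuming a price index expressed with a base value of 100. The figures are illustrative, not Eurostat data.

```python
# Convert nominal sales to real sales using an industry price-index deflator
# (base year index = 100). Illustrative sketch only.

def real_sales(nominal_sales: float, price_index: float) -> float:
    """Deflate nominal sales by an industry price index with base 100."""
    return nominal_sales * 100.0 / price_index

# E.g. 121.0 of nominal sales at a price index of 110 equals 110.0 in real terms.
print(real_sales(121.0, 110.0))   # 110.0
```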

The main independent variable of interest is leverage (LEV). We have used two alternative measures for<br />

leverage in our regression. First, we have included an adjusted measure of leverage, calculated similarly to several<br />

studies on capital structure (Rajan and Zingales 1995, Jog and Tang 2001, Huizinga et al. 2008). This measure takes<br />

into account that some assets on balance sheet are offset by specific non-debt liabilities. Previous studies on<br />

productivity have used either the ratio of short and long term debt to net worth (Dimelis and Louri 2002) or the ratio<br />

of total liabilities to total assets (Greenaway et al. 2009, Weill 2008) to calculate leverage. We believe that our<br />

approach represents a more appropriate measurement of leverage. To take into account the specifics of long-term<br />
financing compared to short-term financing, we have employed long-term leverage as an alternative to<br />

adjusted leverage. While long-term investments should generally be financed from long-term financial resources,<br />

long-term debt could be more difficult to obtain compared to short-term debt. We have used the same denominator<br />

for the long-term leverage as for the adjusted leverage due to the above mentioned advantages of such measurement.<br />

As some of the previous works have identified a non-linear relationship between leverage and labour productivity<br />

(see Appendix 1), we have included both leverage (LEV) and squared leverage (LEV²) in our regression model.<br />

Possible endogeneity of leverage was tested with Davidson-MacKinnon test and the exogeneity of leverage was<br />

supported.<br />
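The two leverage measures defined above (and in Table 1) can be written out as simple functions. The balance-sheet arguments are illustrative names, not Amadeus field labels, and the example figures are invented.

```python
# The adjusted and long-term leverage ratios from Table 1, written explicitly.
# Both share the same denominator: total assets net of non-debt current liabilities.

def adjusted_leverage(st_debt: float, lt_liabilities: float,
                      total_assets: float, current_liabilities: float) -> float:
    """(Short-term debt + Long-term liabilities) /
    (Total assets - Current liabilities + Short-term debt)."""
    denom = total_assets - current_liabilities + st_debt
    return (st_debt + lt_liabilities) / denom

def long_term_leverage(lt_debt: float, st_debt: float,
                       total_assets: float, current_liabilities: float) -> float:
    """Long-term debt / (Total assets - Current liabilities + Short-term debt)."""
    denom = total_assets - current_liabilities + st_debt
    return lt_debt / denom

# Illustrative balance sheet: ST debt 10, LT liabilities 30, assets 200,
# current liabilities 50.
print(round(adjusted_leverage(10, 30, 200, 50), 4))   # 0.25
```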

Previous literature has brought out that larger firms tend to benefit from economies of scale. A comprehensive<br />

discussion of the reasons for the positive impact of company size on productivity is offered by Leung et al. (2008).<br />

Empirical evidence confirms this positive relationship (Dimelis and Louri 2002, Greenaway et al. 2009, Moreno<br />

Badia and Slootmaekers 2008). Company size has been measured in previous research by the logarithm of total<br />

assets (Dimelis and Louri 2002, Greenaway et al. 2009) or by the number of employees (Kale et al. 2007, Hazak and<br />

Männasoo 2010). We prefer the logarithm of total assets as in our case labour productivity is calculated based on the<br />

number of employees. In order to eliminate the impact of inflation, real values of assets have been used.<br />

In order to control for impact of the capital factor, we have included tangibility in the regression. The results of<br />

previous research are inconclusive regarding the relationship between tangibility and productivity. Weill (2008) has<br />

found a negative relationship between tangibility and cost efficiency in all the seven European countries included in<br />

his sample. In addition to industry effects, he explains the relationship with the fact that a higher tangibility level<br />

means lower working capital and therefore lower managerial performance. Greenaway et al. (2009) have found a<br />

negative relationship between TFP and tangibility in China, while the influence of tangibility on labour productivity<br />

remained insignificant. In their quantile regression analysis on a sample of Portuguese firms, Nunes et al. (2007)<br />

found a negative relationship between tangibility and labour productivity in most cases, except for the firms with<br />

very high productivity. They explain the outcome with the tendency that firms with high R&D investments tend to<br />

have less fixed assets. Chen (2010), on the other hand, has found a positive relationship between collateral<br />

(measured by tangible fixed assets by total assets) and TFP in China but the magnitude of the impact was small. She<br />

concludes that firms’ ability to collateralise external borrowing can improve their productivity.<br />

As productivity is considered to vary by the overall level of innovativeness in the industry, the impact of<br />

leverage is observed separately for skill-intensive and non-skill intensive industries. We constructed a dummy<br />

variable for skill-intensive industries (SKILL) and interacted this with the leverage variable (LEV×SKILL). The<br />

classification of industries is based on the Pavitt taxonomy (Pavitt 1984) whereby industries are divided into four<br />

classes – scale intensive, specialised suppliers, science based, and suppliers dominated. We consider the first three<br />

classes as skill-intensive. The concordance between the two-digit US SIC codes and Pavitt’s categories is made<br />

based on Greenhalgh and Rogers (2004). For the industries missing from the latter paper, we have used the<br />

classification according to NACE codes from Pianta and Bogliacino (2008). Since the level of labour productivity tends<br />
to be industry-specific, we also control for this impact by including interaction terms between year and sector<br />
dummies. For that purpose, industries are divided into four sectors (manufacturing, trade, construction, and service).<br />

Productivity is considered to be influenced by product market competition. A comprehensive discussion on the<br />

impact of competition on productivity is provided by Vahter (2006). He shows that in the empirical literature, a<br />

positive relationship between competition and productivity is generally found. To control for the intensity of product<br />

market competition, we have included the Herfindahl index (HHI) as an independent variable, similarly to Kale et<br />

al. (2007). The index is calculated as the sum of squared market shares of all firms in the industry based on the 2-digit<br />



US SIC-codes. However, as Vahter (2006) has pointed out, the Herfindahl index is based on a certain classification<br />

of industries and thus could be misleading. Since no other appropriate proxy for competition is available, we<br />
have used the HHI despite the mentioned shortcoming.<br />
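The Herfindahl index used as the competition control can be computed as below: within each industry, take each firm's market share and sum the squares. The industry codes and sales figures are invented for illustration.

```python
# Herfindahl index per industry: sum of squared market shares.
import pandas as pd

sales = pd.DataFrame({
    "sic2": ["20", "20", "20", "35", "35"],   # illustrative 2-digit SIC codes
    "sales": [50.0, 30.0, 20.0, 70.0, 30.0],  # illustrative firm sales
})
# Market share of each firm within its industry
share = sales["sales"] / sales.groupby("sic2")["sales"].transform("sum")
# HHI: sum of squared shares by industry
hhi = (share ** 2).groupby(sales["sic2"]).sum()
print(hhi.round(4).to_dict())   # {'20': 0.38, '35': 0.58}
```

Industry "20" has shares 0.5, 0.3, 0.2, giving 0.25 + 0.09 + 0.04 = 0.38; a higher HHI indicates a more concentrated, less competitive industry.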

We have divided the sample into two subsets – multinational and non-multinational companies. If more than<br />

50% of a company is directly owned by a foreign company, it is classified as a multinational company (MNC) and<br />

otherwise as a non-multinational (i.e. local) company. The terms “local company” and “non-multinational company”<br />

are used interchangeably in this paper.<br />

As the main focus of our article is the impact of leverage on productivity in the comparative perspective of<br />

multinational and local firms, and considering that multinationality does not vary much over time, we have<br />

interacted the MNC dummy with leverage (LEV×MNC) and the squared term of leverage (LEV²×MNC). In order<br />

to test whether the coefficients for leverage and leverage squared are significantly different for MNCs and local<br />

companies, the Chow test was performed. The independent variables were interacted with the MNC dummy and the<br />

interaction terms were included in the regression. The null hypothesis that the coefficients are equal was rejected<br />

at the 5% significance level.<br />

3.2 Data<br />

We have extracted data on companies operating in Estonia, Latvia and Lithuania from the Amadeus database<br />

compiled by Bureau van Dijk. The database provides financial statements and information regarding the ownership<br />

structure of private and publicly owned European companies. Our sample covers the period from 2001 to 2008.<br />

Companies in the public utilities and financial sector (US SIC codes 4000-4999 and 6000-6999) are excluded from<br />

the analysis due to their fundamentally different financial structure. Branches of foreign companies, cooperative<br />

companies and partnerships are also excluded from the sample since their legal form makes financial decisions<br />

different from regular limited liability companies. Similarly to Weill (2008), unconsolidated data are used. For every<br />

company, data are included in the sample for those years for which financial information was available in sufficient<br />

level of detail and all components of assets and liabilities were non-negative. In order to avoid the unjustified<br />
influence of outliers on the regression results, the upper 2% of labour productivity observations were eliminated.<br />

For the same reason, for companies established before 1991, we have counted their age starting from year 1991<br />

when the Baltic countries regained their independence and the regulatory frameworks for operating a company were<br />

fundamentally changed. Where ownership data were missing for a certain year, the latest available information on<br />

ownership was used. The companies for which no data on the number of employees were available were dropped<br />

from the sample.<br />
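The outlier-trimming step described above can be sketched as dropping all observations above the 98th percentile of labour productivity. The column name and data below are simulated, not the actual sample.

```python
# Trim the upper 2% of labour-productivity observations (illustrative sketch).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Simulated right-skewed productivity values, roughly mimicking sales/employee
df = pd.DataFrame({"productivity": rng.lognormal(mean=4.0, sigma=1.0, size=1000)})

cutoff = df["productivity"].quantile(0.98)     # 98th-percentile threshold
trimmed = df[df["productivity"] <= cutoff]     # keep the lower 98%
print(len(trimmed))                            # 980
```

With 1,000 simulated observations, the interpolated 98th percentile keeps exactly the 980 smallest values.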

As our focus is on the analysis of the labour productivity of multinational companies compared to non-multinationals,<br />
we aimed to have an equal number of multinational and non-multinational companies in the sample.<br />

We therefore included all multinational companies that met our criteria and randomly selected the same number of<br />

local companies from each of the three countries. As a result, our sample consists of 18,401 company-year<br />

observations whereof 50% belong to multinational companies. 50% of observations are from Estonia, 26% from<br />

Latvia, and 24% from Lithuania. The total number of companies included in the sample is 3,676.<br />

Appendix 2 provides descriptive statistics for the two subsamples. On average, MNCs appear to be roughly twice as productive as the non-multinational companies operating in the Baltic countries – the mean value of real labour

productivity of non-multinationals is 83 thousand euros per employee compared to 152 thousand euros in<br />

multinationals. It becomes evident that MNCs are overall considerably bigger than local companies in terms of<br />

sales, assets and headcount but are relatively less leveraged and carry relatively less tangible assets. As discussed in<br />

Section 4, the different size and productivity levels of MNCs compared to local companies tend to have an impact on the relationship between leverage and labour productivity.

Average labour productivity by company age is presented in Figure 1. The figure reveals that for both local<br />

companies and MNCs labour productivity increases rapidly after the start-up phase and starts decreasing gradually<br />

thereafter.<br />



Figure 1. Labour productivity (in thousands of euros) by company age<br />

Average labour productivity grew steadily over the nine years in the Baltic countries (Figure 2), especially for local companies. At the same time, the level of leverage did not increase considerably. There is a slight upward trend in both the adjusted leverage and the long-term leverage of local companies during the boom years of 2005 to 2007, and a drop in 2008 related to the financial and economic crisis. The leverage of MNCs, on the other hand, is relatively stable throughout the years under review.

Figure 2. Average labour productivity (in thousands of euros) and leverage of multinational and local companies by years.<br />

Average labour productivity calculated for ten leverage brackets with a step of 10% (Figure 3) indicates that the<br />

relationship between leverage and labour productivity tends to be non-linear and the nature of this relationship<br />

seems to differ for local and multinational companies. The nature of this relationship is to be studied in a more<br />

sophisticated regression analysis, presented in the next section.<br />

4 Results

[Figure 3 charts: average labour productivity (line, EUR in thousands) and number of observations (columns) of local companies and MNCs, by 10% brackets of adjusted leverage (left panel) and long-term leverage (right panel).]

Figure 3. Average labour productivity (in thousands of euros) by levels of adjusted and long-term leverage

In our panel regression analysis, we find support for the prediction that the relationship between leverage and labour<br />

productivity in the Baltic countries is non-linear (see Table 2 below). Namely, results for Model 1 show that at low

levels of adjusted leverage, increase in debt tends to bring along an increase in labour productivity, while in highly<br />

leveraged companies an increase in debt financing appears to lead to a decrease in labour productivity. This outcome<br />

is similar to Kale et al. (2007) who find a non-linear relationship between leverage and labour productivity on a<br />

sample of US companies. Kale et al. (2007) argue that debt functions as a disciplining mechanism up to a certain<br />

breakpoint starting from where the threat of financial distress or underinvestment due to debt overhang problem<br />

begins to outweigh the incentives from the bonding mechanism. We believe that the positive coefficient of leverage<br />

might also show that a lack of debt financing limits companies' productivity. In the case of long-term leverage (Model 2), the relationship between leverage and labour productivity is also non-linear. The squared term of leverage is negative and significant while the linear term remains insignificant, indicating that long-term leverage tends to have a negative impact on labour productivity.
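The estimation is a firm fixed-effects panel regression of labour productivity on leverage and its square. A minimal within-estimator sketch on synthetic data (the variable names, the data-generating process, and the omission of the controls and sector-year interactions are all simplifications for illustration, not the authors' full specification):

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 200, 9
firm = np.repeat(np.arange(n_firms), n_years)

lev = rng.uniform(0, 1, n_firms * n_years)
alpha = rng.normal(0, 1, n_firms)[firm]            # firm fixed effects
# Synthetic concave relation: productivity first rises, then falls with leverage.
y = alpha + 0.4 * lev - 0.5 * lev**2 + rng.normal(0, 0.1, lev.size)

X = np.column_stack([lev, lev**2])

def demean_by(group, arr):
    """Within transform: subtract the group (firm) mean from each observation."""
    out = arr.astype(float).copy()
    for g in np.unique(group):
        idx = group == g
        out[idx] -= out[idx].mean(axis=0)
    return out

# Demeaning sweeps out the firm fixed effects; OLS on the demeaned data
# recovers the leverage coefficients.
y_w = demean_by(firm, y)
X_w = demean_by(firm, X)
beta = np.linalg.lstsq(X_w, y_w, rcond=None)[0]    # [beta_lev, beta_lev_squared]
```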

Labour Productivity       No interaction terms         With interaction terms       Subsample with LEV>0
                          Adjusted      Long-term      Adjusted      Long-term      Adjusted      Long-term
                          leverage (1)  leverage (2)   leverage (3)  leverage (4)   leverage (5)  leverage (6)
Leverage                   0.15**        0.08           0.35***       0.24**         0.27**        0.22*
                          (0.08)        (0.08)         (0.10)        (0.10)         (0.11)        (0.12)
Leverage²                 -0.38***      -0.33***       -0.54***      -0.50***       -0.43***      -0.40**
                          (0.08)        (0.11)         (0.11)        (0.15)         (0.12)        (0.16)
Leverage×MNC                                           -0.41***      -0.35**        -0.30**       -0.12
                                                       (0.14)        (0.16)         (0.15)        (0.18)
Leverage²×MNC                                           0.33**        0.37*          0.22          0.08
                                                       (0.17)        (0.32)         (0.17)        (0.24)
Age                       -0.01         -0.01          -0.01         -0.01          -0.02         -0.01
                          (0.03)        (0.03)         (0.03)        (0.03)         (0.03)        (0.02)
Age²                       0.00          0.00           0.00          0.00           0.00          0.00
                          (0.00)        (0.00)         (0.00)        (0.00)         (0.00)        (0.00)
Tangibility               -0.56***      -0.56***       -0.57***      -0.57***       -0.61***      -0.59***
                          (0.05)        (0.05)         (0.05)        (0.05)         (0.06)        (0.07)
Size                       0.34***       0.34***        0.34***       0.33***        0.28***       0.27***
                          (0.02)        (0.02)         (0.02)        (0.02)         (0.02)        (0.02)
HHI                       -0.07         -0.06          -0.06         -0.06           0.06          0.02
                          (0.15)        (0.15)         (0.15)        (0.15)         (0.17)        (0.24)
Leverage×skill             0.10*         0.05           0.11*         0.07*          0.06         -0.07
                          (0.06)        (0.07)         (0.06)        (0.07)         (0.06)        (0.08)
Constant                   1.93***       1.96***        1.96***       1.97***        2.34***       2.44***
                          (0.25)        (0.25)         (0.25)        (0.24)         (0.27)        (0.27)
No of obs                 18,401        18,401         18,401        18,401         14,458        10,371
R²                         0.92          0.92           0.92          0.92           0.93          0.93
Company fixed effects     yes           yes            yes           yes            yes           yes
Sector-year interactions  yes           yes            yes           yes            yes           yes
Year dummies              yes           yes            yes           yes            yes           yes

Table 2. Regression Results.4

Our results indicate that the relationship between financing and labour productivity is considerably different for<br />

MNCs compared to local companies. For adjusted leverage as well as long-term leverage (Models 3 and 4<br />

respectively), the interaction term for the leverage of MNCs is negative and significant, while the coefficient for the interaction term between squared leverage and MNC is positive and significant. This implies that the labour productivity of MNCs,

in contrast to local companies, appears to decrease slightly as a reaction to increased leverage. The relationship is<br />

illustrated in Figure 4. At high levels of leverage, the negative impact of leverage on labour productivity is less severe for multinational companies than for local companies.

The leverage breakpoint starting from where the impact of adjusted leverage for local companies becomes<br />

negative is 42%, while the average level of adjusted leverage for local companies is 33% and the median value 27%.<br />

Thus, for more than half of the observations, additional leverage might bring along improvements of labour<br />

productivity. On the other hand, for MNCs additional leverage does not seem to have any positive impact on labour<br />

productivity.<br />
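Given the quadratic specification, the breakpoint at which the marginal effect of leverage turns negative is LEV* = -b1/(2·b2). A quick sketch of this calculation; note that the 42% reported above comes from the authors' unrounded estimates, while the rounded Table 2 coefficients used below give a somewhat lower illustrative value:

```python
def leverage_turning_point(b1: float, b2: float) -> float:
    """Leverage level at which the marginal effect b1 + 2*b2*LEV of a
    quadratic leverage term changes sign (requires b2 < 0)."""
    return -b1 / (2 * b2)

# Illustration with the rounded Model 3 coefficients for local companies.
tp = leverage_turning_point(0.35, -0.54)   # roughly 0.32 with these rounded values
```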

4 *, ** and *** indicate significance at 10%, 5%, and 1% level respectively. Robust standard errors in parentheses.<br />



[Figure 4 charts: labour productivity (line, EUR in thousands) and number of observations (columns) for MNCs and local companies, by leverage levels from 0% to 100%.]

Figure 4. Impact of long-term leverage (left chart) and adjusted leverage (right chart) on labour productivity of MNCs and local companies.

The above relationship indicates that the availability of debt financing does not considerably limit the<br />

productivity of MNCs operating in the Baltics, unlike that of local companies. On the other hand, excess leverage does not appear to jeopardize the productivity of MNCs as severely as that of local companies. A possible explanation for the

different impact of leverage on the labour productivity of MNCs might be that in their case the disciplining role of<br />

debt is weaker compared to local companies. Belonging to a corporate group, MNCs are able to utilise intra-group<br />

financial resources and are therefore less dependent on external debt providers. In addition, the part of financing

that comes in the form of intra-group lending might not function as a monitoring device. Also, the threat of financial<br />

distress is in general lower for MNCs due to potential support from the corporate group. As the size of the<br />

operations of the subsidiaries of multinational groups in the Baltic countries tends to be relatively small compared

to the size of the entire group, providing financing for such operations is not likely to be significantly constrained. In<br />

some cases, maintaining presence in the Baltic market might be of higher priority for corporate groups than<br />

improving short-run results.<br />

Another factor that is likely to cause differences in the impact of leverage on labour productivity of MNCs<br />

compared to local companies is the relatively larger size of MNCs. According to the classical theory of diminishing<br />

marginal returns, an additional unit of labour input tends to result in decreasing growth of output. Thus, there is less potential for productivity growth, and probably fewer opportunities to influence it through additional leverage.

As mentioned in Section 3, compared to local companies, MNCs operate at much higher productivity levels<br />

starting from the establishment (Figure 1). The higher productivity may be related to the advantages of MNCs<br />

described in Section 2. It is therefore more difficult for MNCs to achieve additional improvements in productivity<br />

by financing additional investments.<br />

As regards the control variables, the relationship between labour productivity and company size is positive, as<br />

expected, reflecting the existence of economies of scale in terms of labour productivity. A 1% change in assets<br />

appears to result in a 0.34% change in productivity. We find the relationship between tangibility and labour productivity to be negative. In our sample, a 1% reduction in tangibility results in an increase of labour productivity by 0.57%. This might be explained by the trade-off theory of capital structure (Kraus and Litzenberger 1973), whereby firms rich in intangible assets have less collateral and higher bankruptcy risk and are thus less leveraged. At the same time, innovative firms have been shown to be highly productive (Egger and Keuschnigg 2010). The interaction term

between leverage and high-skilled industries is positive, supporting the argument that skill-intensive sectors seem to<br />

benefit more from higher leverage than others. This could be related to the fact that their tendency to innovate<br />



creates a need for higher financing while the innovative activities might not be transparent for outside agents, and<br />

innovative firms are therefore credit rationed (Egger and Keuschnigg 2010). Our outcome is similar to the findings<br />

of Moreno Badia and Slootmaekers (2008), who found that credit constraints limit productivity in the R&D industry

in Estonia. However, in our case, skill-intensive industries include more industries than R&D. Company age and the<br />

Herfindahl index remained insignificant in explaining labour productivity. In the case of the Herfindahl index, this might be related to the shortcomings of the proxy, as described in Section 3.
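The elasticity reading of the size coefficient above assumes the log-log specification described in Section 3. A quick arithmetic check of what a coefficient of 0.34 implies:

```python
# In a log-log model ln(P) = a + b*ln(A), scaling assets A by a factor g
# scales productivity P by g**b.
b = 0.34
g = 1.01                 # a 1% increase in assets
change = g**b - 1        # relative change in productivity, about 0.34%
```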

We have tested the robustness of the results by running the regressions on a restricted subsample in which leverage is above zero. In the case of adjusted leverage (Model 5), the outcome remained the same, except that the interaction term between MNC and squared leverage is insignificant in explaining labour productivity for the subsample, potentially because of its smaller size. For long-term leverage (Model 6) the impact of leverage also stayed the same, while the interaction terms remained insignificant, as the sample for this regression was almost half the size of the main sample.

5 Summary<br />

We found the impact of leverage on labour productivity to be considerably different for MNCs and local companies<br />

operating in the Baltic countries. While there appears to be a positive concave relationship between leverage and<br />

labour productivity for local companies, the impact is slightly negative in the case of MNCs. We show that at moderate

levels of leverage (up to an adjusted leverage of 42%), lending tends to have a positive impact on labour<br />

productivity of local companies in the Baltic countries. On the other hand, at high levels of leverage, there appears<br />

to be a severe negative impact of leverage on labour productivity.<br />

For MNCs additional leverage does not tend to bring along any improvements in labour productivity, while at<br />

high levels of leverage, the negative impact of leverage on labour productivity is not as severe as in the case of local companies. The different impact of leverage on MNCs can be explained by the weaker role of debt as a monitoring

device, lower bankruptcy risk, and the lower marginal productivity related to the higher productivity levels and<br />

relatively bigger size of the multinational companies.<br />

Although the debt overhang problem is considered a threat to the Baltic economies (Herzberg 2010), for many local companies additional leverage might bring along improvements in labour productivity. This finding

may be of interest to companies, financing institutions and policy makers.<br />

6 Acknowledgements<br />

We are grateful to Professor Karsten Staehr, Dr Juan Carlos Cuestas, and the participants of the TTÜ doctoral<br />

seminar for valuable comments and to the Estonian Science Foundation (grant no ETF8796) for financial support.<br />

7 References<br />

Arratibel, O., Heinz, F. Martin, R., Przybyla, M., Rawdanowicz, L., Serafini, R., Zumer, T. (2007). Determinants of<br />

growth in the central and eastern European EU member states - a production function approach. European<br />

Central Bank. Occasional Paper Series. No. 61.<br />

Avarmaa, M., Hazak, A., Männasoo, K. (2011). Formation of the capital structures of multinational and local<br />

companies in the Baltic countries. Baltic Journal of Economics. Forthcoming.<br />

Bellak, C. (2004). How Domestic and Foreign Firms Differ and Why Does it Matter? Journal of Economic Surveys,<br />

18 (4), 483-514.<br />

Bijsterbosch, M., Kolasa, M. (2009). FDI and productivity convergence in central and eastern Europe - an industry-level investigation. ECB, Working Paper Series, No. 992.

Blomström, M. Kokko, A. (1998). Multinational corporations and spillovers. Journal of Economic Surveys. 12 (2),<br />

247-278.<br />

Castellani, D., Zanfei, A. (2004). Multinationals, Innovation and Productivity. Evidence from Italian Manufacturing<br />

Firms. University of Urbino. http://www.fscpo.unict.it/pdf/Castellani%20(paper).pdf (06.12.2010)<br />



Chen, M. (2010). Financial Effects and Firm Productivity: Evidence from Chinese Manufacturing Data. University of<br />

Nottingham. 9 th Annual Postgraduate Conference, www.nottingham.ac.uk/GEP.<br />

Coricelli, F., Driffeld, N., Pal, S., Roland, I. (2010) Leverage and Productivity Growth in Emerging Economies: Is<br />

There A Threshold Effect? Brunel University Economics and Finance Working Paper Series, Working<br />

Paper No. 10-21.<br />

Dimelis, S., Louri, H. (2002). Foreign Ownership and Production Efficiency: a Quantile Regression Analysis. Oxford<br />

Economic Papers, 54, 449-469.<br />

Doms, M., and Jensen, B. (1998). Comparing Wages, Skills, and Productivity between Domestically and Foreign<br />

Owned Manufacturing Establishments in the United States. In Geography and Ownership as Bases for<br />

Economic Accounting, 235-258.<br />

Egger, P. Keuschnigg, C. (2010). Innovation, Trade, and Finance. University of St. Gallen. Working Paper Series.<br />

Working paper 2010-08.<br />

Eurostat. http://epp.eurostat.ec.europa.eu/portal/page/portal/eurostat/home/ (12.02.11)<br />

European Commission. (2010). Economic Policy Challenges in the Baltics. Cross-Country Study. Occasional Papers, 58.

Gatti, R., Love, I. (2006). Does Access to Credit Improve Productivity? Evidence from Bulgarian Firms. World<br />

Bank, Policy Research Paper No. 3921.<br />

Gersl, A., Rubene, E., Zumer, T. (2007). Foreign direct investment and productivity spillovers: Updated evidence<br />

from the Central and Eastern European countries. Czech National Bank, Working Paper Series, 8/2007.<br />

Girma, S., Greenaway, D., Wakelin, K. (1999). Wages, Productivity and Foreign Ownership in UK Manufacturing.<br />

Centre for Research on Globalization and Labour Markets, Research Paper No. 99/14.<br />

Ghosh, S. (2009) Productivity and Financial Structure: Evidence from Indian High-Tech Firms. MPRA Paper, No.<br />

19467.<br />

Globerman, S., Ries, J., Vertinsky, I. (1994). The Economic Performance of Foreign Affiliates in Canada. The<br />

Canadian Journal of Economics, 27 (1), 143-156.<br />

Greenaway, D., Guariglia, A., Yu, Z. (2009). The More the Better? Foreign Ownership and Corporate Performance<br />

in China. University of Nottingham, Research Paper No. 2009/05.<br />

Greenhalgh, C., Rogers, M. (2004). The Value of Innovation: The Interaction of Competition, R&D and IP.<br />

University of Oxford, Department of Economics. Economics Series Working Papers, 192.<br />

Hall, R., Jones, C. (1999). Why Do Some Countries Produce So Much More Output per Worker than Others? The<br />

Quarterly Journal of Economics, 114 (1), 83-116.<br />

Hazak, A., Männasoo, K. (2010). Indicators of Corporate Default – An EU Based Empirical Study. Transformations<br />

in Business & Economics, 9 (1), 62–76.<br />

Herzberg, V. (2010). Assessing the Risk of Private Sector Debt Overhang in the Baltic Countries. IMF Working<br />

Paper. WP/10/250.<br />

Hossain, F., Jain, R., Govindasamy, R. 2005. Financial Structure, Production and Productivity: Evidence from the<br />

U.S. Food Manufacturing Industry. Agricultural Economics, Vol. 33, No.S3, 399-410<br />

Huizinga, H., Laeven, L., Nicodame, G. (2008). Capital Structure and International Debt Shifting. Journal of<br />

Financial Economics, 88 (1), 80-118.<br />

Jensen, M., Meckling, W. (1976). Theory of the Firm: Managerial Behavior, Agency Costs, and Ownership<br />

Structure. Journal of Financial Economics, 3 (4), 305-360.<br />

Jog, V., Tang, J. (2001). Tax Reforms, Debt Shifting and Tax Revenues: Multinational Corporations in Canada.<br />

International Tax and Public Finance, 8 (1), 5-25.<br />

Kale, J., Ryan, H. Wang, L. (2007). Debt as a Bonding Mechanism: Evidence from the Relations between Employee<br />

Productivity, Capital Structure, and Outside Employment Opportunities. 18th Annual Conference on<br />

Financial Economics and Accounting, NYU, http://w4.stern.nyu.edu/salomon/docs/conferences/Kale-Ryan-<br />

Wang-.pdf. (06.12.2010)<br />

King, R., Levine, R. (1993). Finance and growth: Schumpeter might be right. Quarterly Journal of Economics, 108<br />

(3), 717-737<br />

Kraus, A. Litzenberger, R. (1973). A State Preference Model of Optimal Financial Leverage. Journal of Finance, 28,<br />

911-922.<br />

Moreno Badia, M., Slootmaekers, V. (2008). The Missing Link Between Financial Constraints and Productivity.<br />

LICOS Discussion Paper Series, Discussion Paper 208/2008.<br />

Myers, S. (1977). Determinants of Corporate Borrowing, Journal of Financial Economics, 5, 147-75<br />

Myers, S. (1984). Capital Structure Puzzle. NBER Working Paper Series, No. 1393.<br />



Nucci, F., Pozzolo, A., Schivardi, F. (2005). Is firm’s productivity related to its financial structure? Evidence from<br />

microeconomic data. Rivista di Politica Economica, SIPI Spa, 95(1), 269-290.<br />

Nunes, P., Sequeira, T., Serrasqueiro, Z. (2007). Firms' leverage and labour productivity: a quantile approach in Portuguese firms. Applied Economics, 39 (14), 1783-1788.

OECD. The Sources of Economic Growth in OECD Countries.<br />

http://www.oecd.org/dac/ictcd/docs/otherdocs/OtherOECD_eco_growth.pdf<br />

Oulton, N. (1998a). Investment, Capital and Foreign Ownership in UK Manufacturing. NIESR Discussion Paper. No<br />

141.<br />

Oulton, N. (1998b). Labour Productivity and Foreign Ownership in the UK. NIESR Discussion Paper. No 143.<br />

Pavitt, K. (1984). Sectoral Patterns of Technical Change: Towards a Taxonomy and Theory. Research Policy. 13,<br />

343–373<br />

Pfaffermayr, M., Bellak, C. (2000). Why Foreign-Owned Firms Are Different. A Conceptual Framework and<br />

Empirical Evidence for Austria. HWWA Discussion Paper Series.<br />

Pianta, M., Bogliacino, F. (2008). The Impact of R&D and Innovation on Economic Performance and Employment:<br />

A Quantitative Analysis Based on Innovation Survey Data, University of Urbino, Faculty of Economics.<br />

Rajan, R., Zingales, L. (1995). What Do We Know about Capital Structure? Some Evidence from International Data.<br />

Journal of Finance, 50 (5), 1421-1460.<br />

Ross, S. (1977). The Determination of Financial Structure: The Incentive Signaling Approach. Bell Journal of<br />

Economics, 8, 23-40<br />

Schadler, S. Mody, A., Abiad, A., Leigh, D. (2006). Growth in the Central and Eastern European countries of the<br />

European Union. IMF. Occasional Paper. No 252.<br />

Solow, R. M. (1956). A Contribution to the Theory of Economic Growth. Quarterly Journal of Economics, 70, 65-94<br />

Syverson, C. (2010). What Determines Productivity. NBER Working Paper Series, No 15712.<br />

Vahter, P. (2004). The Effect of Foreign Direct Investment on Labour Productivity: Evidence From Estonia and<br />

Slovenia. Tartu University Press.<br />

Vahter, P. (2006). Productivity in Estonian Enterprises: The Role of Innovation and Competition. Bank of Estonia<br />

Working Paper Series, No. 7.<br />

Vahter, P., Masso, J. (2006). Home versus Host Country Effects of FDI: Searching for New Evidence of Productivity<br />

Spillovers. William Davidson Institute. Working Paper No. 820.<br />

Weill, L. (2008). Leverage and Corporate Performance: Does Institutional Environment Matter? Small Business<br />

Economics, 30 (3), 251-265<br />



Appendix 1. Summary of previous studies on the impact of leverage on productivity.

Authors                  Productivity measure  Formula for leverage                               Sign for leverage
Dimelis and Louri 2002   Labour productivity   (Short-term debt + long-term debt)/total assets    +
Nucci et al. 2005        TFP                   Debt/total assets                                  -
Kale et al. 2007         Labour productivity   (Book value of long-term debt + short-term debt)/
                                               (book value of debt + market value of equity)      non-linear
Nunes et al. 2007        Labour productivity   Total liabilities/total assets                     -
Weill 2008               Cost efficiency       Total liabilities/total assets                     varies by country
Ghosh 2009               TFP                   Total debt/total assets                            -
Greenaway et al. 2009    TFP                   Total liabilities/total assets                     -
Coricelli et al. 2010    TFP growth            Total debt/total assets                            non-linear

Appendix 2. Descriptive statistics for multinational and local companies (monetary values in thousands of euros).

                                Mean    Median   Sd      Min      Max      No of obs   Wilcoxon rank-sum test (z)
Total Assets         Local      1,959   749      5,473   0        174,424  9,282       -45.8***
                     MNC        5,673   1,933    12,658  1        272,140  9,119
Equity               Local      872     267      3,400   0        156,138  9,282       -35.2***
                     MNC        2,571   633      7,073   0        132,309  9,119
Long-term Debt       Local      294     17       1,620   0        64,669   9,282       8.6***
                     MNC        744     0        4,133   0        104,508  9,119
Short-term Debt      Local      213     20       936     0        22,489   9,282       3.6***
                     MNC        510     7        1,996   0        79,583   9,119
Sales                Local      3,361   1,392    7,354   0        174,582  9,282       -46.7***
                     MNC        9,248   3,569    17,825  1        401,879  9,119
Net Profit           Local      192     55       797     -7,538   39,981   9,282       -19.8***
                     MNC        471     123      1,816   -15,657  49,357   9,119
Adjusted Leverage    Local      0.33    0.27     0.29    0.00     1.00     9,282       10.8***
                     MNC        0.30    0.18     0.31    0.00     1.00     9,119
Long-term Leverage   Local      0.11    0.03     0.16    0.00     0.99     9,282       18.0***
                     MNC        0.08    0.00     0.16    0.00     0.99     9,119
No of employees      Local      50      27       77      1        1,557    9,282       -13.3***
                     MNC        86      33       157     1        4,985    9,119
Labour Productivity  Local      83      39       126     0        1,024    9,282       -39.2***
                     MNC        152     86       176     0        1,030    9,119
Age                  Local      9.0     9.3      4.2     0.1      17.0     9,282       5.7***
                     MNC        8.6     8.8      4.1     0.1      17.0     9,119
Tangibility          Local      0.33    0.30     0.24    0.00     1.00     9,282       22.4***
                     MNC        0.26    0.17     0.25    0.00     1.00     9,119
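The group differences in Appendix 2 are assessed with the Wilcoxon rank-sum test. A minimal normal-approximation sketch of the test statistic, run on synthetic lognormal draws rather than the actual sample (and without the tie correction a full implementation would use):

```python
import numpy as np

def rank_sum_z(x, y):
    """Normal-approximation z-statistic of the Wilcoxon rank-sum test.
    Negative z means values of x tend to rank below values of y."""
    combined = np.concatenate([x, y])
    ranks = combined.argsort().argsort() + 1.0     # ranks 1..n (no tie handling)
    n1, n2 = len(x), len(y)
    w = ranks[:n1].sum()                           # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma

rng = np.random.default_rng(0)
local = rng.lognormal(mean=3.6, sigma=1.0, size=2000)   # synthetic, not the sample
mnc = rng.lognormal(mean=4.2, sigma=1.0, size=2000)
z = rank_sum_z(local, mnc)   # strongly negative: local values rank lower
```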

Appendix 3. Number of observations per industry. (Labour productivity rows give mean labour productivity in thousands of euros.)

                             Agriculture,
                             forestry,
                             and fishing  Mining  Construction  Manufacturing  Wholesale trade  Retail trade  Services  Total
Local  Labour productivity   29           43      49            48             177              57            48        83
       No of observations    418          29      1,657         2,520          2,450            1,333         875       9,282
       Share of observations 5%           0%      18%           27%            26%              14%           9%        100%
MNC    Labour productivity   99           41      115           78             258              87            89        152
       No of observations    95           110     429           2,965          3,589            612           1,319     9,119
       Share of observations 1%           1%      5%            33%            39%              7%            14%       100%
Total  Labour productivity   42           41      63            64             225              67            73        117
       No of observations    513          139     2,086         5,485          6,039            1,945         2,194     18,401
       Share of observations 3%           1%      11%           30%            33%              11%           12%       100%

Appendix 4. Pairwise correlations between variables.

                      Labour        Adjusted   Long-term
                      Productivity  Leverage   Leverage   Tangibility  Size      Age       HHI
Labour Productivity    1.00
Adjusted Leverage     -0.03***      1.00
Long-term Leverage    -0.09***      0.68***    1.00
Tangibility           -0.32***      0.31***    0.36***    1.00
Size                   0.15***      0.06***    0.06***    0.10***      1.00
Age                   -0.06***     -0.10***   -0.05***    0.09***      0.09***   1.00
HHI                   -0.13***      0.03***    0.04***    0.16***      0.15***   0.03***   1.00

*** denotes significance at the 1% level.



MULTISTAGE INVESTMENT OPTIONS, TIME-TO-BUILD AND FINANCING CONSTRAINTS<br />

Elettra Agliardi, University of Bologna and Rimini Centre for Economic Analysis, e-mail: elettra.agliardi@unibo.it<br />

Nicos Koussis, Frederick University Cyprus and University of Bologna, e-mail: bus.kn@fit.ac.cy<br />

Abstract. A dynamic investment options model with “time-to-build”, debt and equity constraints is studied to evaluate the effects on<br />

firm value and leverage choices. It is shown that a firm is more likely to face financing constraints with short-term debt. With "time-to-build" and a tax scheme with a full loss offset provision, the firm increases initial leverage in order to reduce the impact of delayed

cash flow receipts resulting from “time-to-build”. Under no deductibility for losses, the firm would reduce initial debt significantly.<br />

The joint impact of “time-to-build” and financing constraints causes a significant decrease in firm values.<br />

Keywords: investment options, optimal capital structure; time-to-build; financing constraints; binomial lattice models; real options.<br />

JEL classification: G3; G32; G33; G1

1 Introduction<br />

Recent theoretical developments in corporate finance building on Leland (1994) have provided a unified framework<br />

for the analysis of investment and financing decisions of the firm (see, e.g. Mauer and Sarkar, 2005; Sundaresan and<br />

Wang, 2007; Hackbarth and Mauer, 2010). These papers do not deal with the effects of equity and debt financing<br />

constraints, which are considered extensively in the empirical literature (see, e.g., Rauh, 2006; Hubbard et al., 1995; Whited and Wu, 2006; Titman et al., 2004). In this paper we develop a comprehensive real option model using a

numerical binomial tree approach that extends Broadie and Kaya (2007), allowing for optimal capital structure,<br />

multiple debt issues with different priority rules, multiple investment option stages, different tax schemes and equity<br />

and debt financing constraints. Moreover, we analyze how time-to-build affects leverage choices and how the joint<br />

presence of time-to-build and financing constraints would affect firm values and leverage choices over time. To our<br />

knowledge, this problem and this framework have not been tackled within the literature so far.<br />
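As a rough illustration of the binomial lattice machinery underlying such models, the following sketch values a single unlevered investment option on a Cox-Ross-Rubinstein tree with early exercise. It deliberately omits everything the paper adds (debt issues, priority rules, tax schemes, time-to-build, financing constraints) and is not the authors' implementation; all parameter values are hypothetical.

```python
import math

def investment_option_crr(v0, inv_cost, r, sigma, T, steps):
    """Value of an option to invest (pay inv_cost, receive project value V)
    on a CRR lattice, with investment allowed at every node."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)           # risk-neutral probability
    disc = math.exp(-r * dt)

    # Terminal payoffs: invest only if the project is worth more than its cost.
    values = [max(v0 * u**j * d**(steps - j) - inv_cost, 0.0)
              for j in range(steps + 1)]

    # Roll back through the tree, comparing waiting with investing now.
    for i in range(steps - 1, -1, -1):
        values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                      v0 * u**j * d**(i - j) - inv_cost)
                  for j in range(i + 1)]
    return values[0]

opt = investment_option_crr(v0=100, inv_cost=100, r=0.05, sigma=0.2, T=2, steps=200)
```

With these (hypothetical) parameters the lattice value converges toward the corresponding Black-Scholes call value, since early exercise is never optimal without intermediate cash flows.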

There are very few theoretical studies considering the impact of time-to-build on the valuation of an investment<br />

project. Theoretical work using a real option approach with time-to-build has focused on the case without optimal<br />

capital structure (see Majd and Pindyck, 1987, Bar-Ilan and Strange, 1996 and 1998). In Majd and Pindyck (1987)<br />

there is a maximum rate at which construction proceeds, so that it takes time before the project is completed and<br />

begins to generate revenue. Investment proceeds continuously until the project is completed, although construction<br />

can be stopped and later restarted without a cost. In contrast, in our paper investment decisions are made discretely,<br />

rather than continuously, the investment option comes to the end of its useful life, instead of being infinitely lived,<br />

and optimal capital structure and financial constraints are introduced. Koussis et al. (2007) analyze a similar case<br />

called “time-to-learn” where the firms learn new information about the project with a time lag. An interesting result<br />

which emerges from the above mentioned papers is that the usual relationship between volatility and the opportunity<br />

cost and the timing of investment may be reversed in the presence of time-to-build, i.e., an increase in volatility or a<br />

decrease in the opportunity cost may accelerate investment. This result holds because an increase in time-to-build<br />

causes a reduction in the value (moneyness) of the option, so that an increase in volatility or a decrease in the<br />

opportunity cost may sufficiently increase the value of the project triggering earlier investment. Given the<br />

prevalence of both time-to-build and financing constraints in different industries, we also investigate their joint<br />

impact on firm values.<br />

Our results show that with short term debt the firm optimally sets coupon levels to be high, often exceeding<br />

revenue levels at the time of debt issue. In the presence of non-negative net worth (equity) constraints an increase in<br />

volatility hurts firm value by decreasing both the value of unlevered assets and the tax benefits of debt. This<br />

decrease in the tax benefits of debt is caused by the decrease in the use of debt in early stages, although subsequent<br />

leverage may increase. With debt financing constraints, higher volatility increases firm value for out-of-the-money<br />

options by increasing the option value component of firm value (since the firm cannot exploit high tax benefits of<br />

debt). For in-the-money options an increase in volatility has little impact on firm values. Our analysis further<br />

explores how different tax regimes may have an impact on firm values and leverage choice. Most trade-off models<br />

of the capital structure assume a full loss offset taxation scheme. Alternative tax schemes include the asymmetric<br />



tax-scheme, where the tax benefits of debt are lower when the firm incurs losses, or a no loss offset scheme (e.g.,<br />

Tserlukevich, 2008, Agliardi and Agliardi, 2009). In our paper we compare the effects of a full loss offset tax regime<br />

with the no loss offset case. We show that in the latter firms behave considerably more conservatively in the use of<br />

debt for short term maturity, with a substantial firm value reduction compared to the full offset scheme. Sensitivity<br />

analysis results show in general similar directional effects between the two tax schemes (both in the presence and<br />

absence of financing constraints). The assumptions concerning the tax regime become particularly important in the<br />

case of time-to-build. If full loss offset provisions are assumed, we find that firms will issue debt prior to<br />

completion of the project to benefit from the resulting tax benefits and mitigate the time-to-build constraints. With<br />

no loss offset the use of initial debt is substantially reduced while subsequent debt value is higher, at least for short<br />

term time-to-build constraints. Our results with time-to-build and full loss offset reveal that debt levels increase to<br />

alleviate the impact of delayed cash flow receipt due to time-to-build. With low volatility the impact of time-to-build<br />

on firm value is reduced since the firm can borrow more heavily in order to alleviate the impact of time-to-build. A<br />

similar effect holds for low opportunity cost-competitive erosion. Equity financing constraints have an important<br />

effect on firm values in the presence of time-to-build (decreasing them by 31% for a 5-year time-to-build horizon and by<br />

43% for a 10-year time-to-build horizon). For low volatility and high opportunity cost the impact of equity financing<br />

constraints with time-to-build is more significant. The impact of debt financing constraints on firm values is also highly<br />

significant, reaching 42% for plausible parameter values. At low opportunity cost debt financing constraints<br />

significantly reduce bankruptcy risk (because of the high debt capacity in the unconstrained case) and the firm<br />

balances the leverage levels between the initial and the subsequent debt issue. With a no loss offset tax scheme<br />

initial debt levels are reduced significantly for short to medium term horizons with initial leverage increasing<br />

slightly only for long time-to-build horizons. Under this tax scheme we similarly observe that at lower volatility or<br />

lower opportunity cost firm value is increased for all time-to-build horizons. The percentage decrease in firm value<br />

due to time-to-build remains less severe with lower volatility or lower opportunity cost, as in the full offset case.<br />

However, due to the lower tax benefits under the no loss offset scheme the differences between high and low<br />

volatility and high and low opportunity cost are lower under all time-to-build horizons.<br />

Overall, we demonstrate the flexibility of the lattice method to incorporate several features that are embedded in<br />

many investment decisions characterized by lengthy construction periods and financing constraints.<br />

2 The Model<br />

Consider a firm whose revenues follow a geometric Brownian motion of the form dP = P(α dt + σ dZ), where α,<br />

σ > 0 are constant parameters, r is the risk-free interest rate and dZ is the increment of a standard Wiener process.<br />

The firm pays an operational cost C so that total earnings before interest and taxes are P − C. It can decide whether to<br />

invest at time T1 by paying an irreversible fixed cost I1 and choosing a mix of debt D1(P) and equity E1(P) =<br />

I1 − D1(P) to finance the investment cost. After the first investment stage, subsequent investment stages may<br />

follow with maturities Ti, i = 2, 3, …, S relative to the prior stage (so that the actual time for option i is the<br />

accumulated time T1 + T2 + … + Ti). At each investment stage the firm may decide to issue new debt and rebalance<br />

its capital structure. Tax-deductible coupon payments Ri are due at each period i and the principal debt (face<br />

value) Fi is paid at maturity. Debt maturity for each debt issue is specified by TDi, with TD1 ≤ TF,<br />

TD2 ≤ TF − (T1 + T2), TD3 ≤ TF − (T1 + T2 + T3), etc., where TF denotes the firm life. In order to<br />

accommodate the choice of different coupon levels at each investment stage we employ a forward-backward<br />

algorithm. The algorithm first creates the pre-investment stage tree with N1 steps. At each revenue level at the end<br />

nodes of the first investment stage, several lattices are created that capture the next operational phase and default<br />

decisions conditional on the choice of the coupon level. Coupon levels depend on the revenue P at each state, which is<br />

discretized through the choice of nc points and the use of a maximum of cmax points. This implies a coupon grid<br />

{0, (1/nc)P, (2/nc)P, …, (cmax/nc)P}. Investment stages are approximated by lattices with sizes that are defined relative<br />



to the tree used for the pre-investment stage. Denoting the number of steps of the pre-investment stage by N1, the size of<br />

the i-th subsequent interval (i = 2, 3, …) becomes Ni = (Ti/T1)N1. The last period (after TS) is approximated by<br />

NF = [(TF − (T1 + T2 + … + TS))/T1]N1.<br />
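The discretization choices described in this section — the coupon grid, the relative lattice sizes, and the standard Cox-Ross-Rubinstein lattice parameters used below — can be sketched as follows. This is an illustrative sketch, not the authors' code; function names and parameter values are ours. Note that the base case in the tables sets T1 = 0, for which the ratio Ti/T1 is degenerate, so the sketch assumes a strictly positive first-stage length.<br />

```python
import math

def coupon_grid(P, nc=20, cmax=40):
    """Candidate coupon levels {0, (1/nc)P, (2/nc)P, ..., (cmax/nc)P}
    at a node with revenue level P."""
    return [k * P / nc for k in range(cmax + 1)]

def lattice_sizes(N1, T, TF):
    """Steps per interval: N1 for the pre-investment stage of length T[0],
    Ni = (Ti/T1)N1 for subsequent stages, and the remainder after TS."""
    sizes = [N1] + [round(Ti / T[0] * N1) for Ti in T[1:]]
    sizes.append(round((TF - sum(T)) / T[0] * N1))  # last period after TS
    return sizes

def crr_params(sigma, r, delta, dt):
    """Cox-Ross-Rubinstein up/down moves and risk-neutral probabilities,
    with opportunity cost (payout) rate delta."""
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    pu = (math.exp((r - delta) * dt) - d) / (u - d)
    return u, d, pu, 1.0 - pu
```
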

A standard formulation of the lattice parameters for the up and down jumps and the up and down probabilities (see<br />

Cox, Ross and Rubinstein, 1979) requires that<br />

u = e^(σ√dt), d = e^(−σ√dt) = 1/u, pu = (e^((r−δ)dt) − d)/(u − d), pd = 1 − pu,<br />

where dt = TF/NF and δ is an opportunity cost parameter. We keep track of the following information at each node<br />

of the binomial tree: unlevered assets (V^u), tax benefits of debt (TB), bankruptcy costs (BC), equity (E), debt<br />

issues (D1, D2, …, DS) and levered firm value (V^L). We denote by b the proportion of the value of the firm that is<br />

lost in case of bankruptcy and assume that debt holders will be reimbursed following an absolute priority rule. It<br />

implies that the debt holders’ claim in case of default is:<br />

D1t = min[(1 − b)Vt^u, R1Δt + D̃1t]    (1)<br />

Djt = min[(1 − b)Vt^u − Σ_{i=1}^{j−1} Dit, RjΔt + D̃jt],  j = 2, …, S<br />

where D̃it denotes the expected continuation value for debt issue i in case the firm does not default at t, that is,<br />

D̃it = (pu Dt+dt,h + (1 − pu) Dt+dt,l) e^(−r dt). Cash inflows (revenues) and outflows (costs and interest payments)<br />

as well as decisions occur every time step Δt. Δt can be controlled by a variable Ndec that specifies the number<br />

of decision points within each unit period¹. At the end of the operational phase TF equity and the other variables<br />

are calculated as follows:<br />

ETF = max[(P − C − Σ_{i=1}^{S} Ri Ii^debt)(1 − τ)Δt − Σ_{i=1}^{S} Fi Ii^debt, 0]    (2)<br />

where τ > 0 is the corporate tax rate and Ii^debt is an indicator that takes the value 1 if debt issue i has not expired<br />

and zero otherwise. If ETF > 0, then VTF^u = (P − C)(1 − τ)Δt, TBTF = (τ Σ_{i=1}^{S} Ri Ii^debt)Δt, BCTF = 0,<br />

DiTF = RiΔt + Fi, VTF^L = ETF + Σ_{i=1}^{S} DiTF Ii^debt;<br />

otherwise, if ETF = 0 (i.e., bankruptcy occurs)², then VTF^u = (P − C)(1 − τ)Δt, TBTF = 0, BCTF = bVTF^u,<br />

VTF^L = ETF + Σ_{i=1}^{S} DiTF. Debt values at maturity in the event<br />

of default depend on the priority rule. Under the absolute priority rule we get:<br />

D1TF = min[(1 − b)VTF^u, R1Δt + F1]    (3)<br />

DjTF = min[(1 − b)VTF^u − Σ_{i=1}^{j−1} DiTF, RjΔt + Fj],  j = 2, …, S<br />

1 Thus, Δt = 1/Ndec. Each Δt interval is approximated by a sub-tree NΔt. To maintain accuracy discounting occurs for the interval dt = Ti/Ni. In<br />

principle, the decisions can be made as dense as possible, approximating the continuous decision limit when Ndec tends to infinity.<br />

2 If the value of unlevered assets turns negative then the values of all variables are set to zero.<br />
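The terminal conditions in equations (2)-(3) can be expressed in code roughly as follows. This is an illustrative sketch under the absolute priority rule, not the authors' code; all parameter values in the test are hypothetical.<br />

```python
def terminal_values(P, C, R, F, b, tau, dt):
    """Payoffs at TF, following eqs. (2)-(3). R and F hold the coupons and
    principals of the debt issues still alive (the expiry indicator is applied
    by passing only live issues). Returns (E, D, TB, BC)."""
    E = max((P - C - sum(R)) * (1 - tau) * dt - sum(F), 0.0)
    Vu = (P - C) * (1 - tau) * dt
    if E > 0:                                  # solvent: full claims paid
        D = [Ri * dt + Fi for Ri, Fi in zip(R, F)]
        TB, BC = tau * sum(R) * dt, 0.0
    else:                                      # default: absolute priority, eq. (3)
        recovery = (1 - b) * Vu
        D = []
        for Ri, Fi in zip(R, F):
            Di = min(max(recovery, 0.0), Ri * dt + Fi)
            D.append(Di)
            recovery -= Di
        TB, BC = 0.0, b * Vu
    return E, D, TB, BC
```
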



Prior to the maturity of the operational phase (and after all investments have taken place) the values of each of these<br />

variables are calculated as follows:<br />

Et = max[(P − C − Σ_{i=1}^{S} Ri Ii^debt)(1 − τ)Δt + Ẽt, 0]<br />

If Et > 0, then Vt^u = (P − C)(1 − τ)Δt + Ṽt^u, BCt = 0 + B̃Ct, TBt = (τ Σ_{i=1}^{S} Ri Ii^debt)Δt + T̃Bt,<br />

Dit = RiΔt + D̃it, Vt^L = Et + Σ_{i=1}^{S} Dit Ii^debt, whereas, if Et = 0, then Vt^u = (P − C)(1 − τ)Δt + Ṽt^u,<br />

BCt = bVt^u and TBt = 0, Vt^L = Et + Σ_{i=1}^{S} Dit Ii^debt, where x̃t denotes the expected discounted value of<br />

variable x and equals x̃t = (pu xt+dt,h + (1 − pu) xt+dt,l) e^(−r dt).<br />
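One backward-induction step of the recursions above might be sketched as follows. This is illustrative only, not the authors' code: `up` and `down` hold the next-period values at the two successor nodes, and all names are ours.<br />

```python
import math

def disc_exp(xu, xd, pu, r, dt):
    """Expected discounted value (pu*xu + (1-pu)*xd)*e^(-r*dt)."""
    return (pu * xu + (1.0 - pu) * xd) * math.exp(-r * dt)

def step_back(P, C, R_alive, b, tau, dt, r, pu, up, down):
    """One backward step. up/down are dicts with scalars E, Vu, TB, BC and a
    list D (one entry per live debt issue). Returns this node's values."""
    t = {k: disc_exp(up[k], down[k], pu, r, dt) for k in ("E", "Vu", "TB", "BC")}
    Dt = [disc_exp(a, c, pu, r, dt) for a, c in zip(up["D"], down["D"])]
    E = max((P - C - sum(R_alive)) * (1 - tau) * dt + t["E"], 0.0)
    Vu = (P - C) * (1 - tau) * dt + t["Vu"]
    if E > 0:                                  # firm continues
        TB = tau * sum(R_alive) * dt + t["TB"]
        BC = t["BC"]
        D = [Ri * dt + Di for Ri, Di in zip(R_alive, Dt)]
    else:                                      # default: claims follow eq. (1)
        TB, BC = 0.0, b * Vu
        recovery = (1 - b) * Vu
        D = []
        for Ri, Di in zip(R_alive, Dt):
            Dj = min(max(recovery, 0.0), Ri * dt + Di)
            D.append(Dj)
            recovery -= Dj
    return {"E": E, "Vu": Vu, "TB": TB, "BC": BC, "D": D}
```
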

For points within the lattice not involving a default decision, which are used for increased accuracy, the<br />

values of each variable are the discounted expected values of the variables of the following period. At the maturity<br />

of each investment option stage i occurring at time t, where t takes values according to the specified investment<br />

maturities, the levered firm value includes the equity value plus the amount of debt received at i plus the expected<br />

values of debt raised in the future, minus the total cost, which includes the investment paid at stage i and the<br />

expected cost to be paid in the future:<br />

Vt^L = max[Et + Dit + Σ_{k=i+1}^{S} D̃kt − (Ii + Σ_{k=i+1}^{S} Ĩk), 0] = max[Vt^u + TBt − BCt − (Ii + Σ_{k=i+1}^{S} Ĩk), 0]    (4)<br />
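The exercise rule in equation (4) amounts to a simple comparison at each stage-i maturity node. The sketch below is illustrative only; the inputs are hypothetical time-t values.<br />

```python
def stage_value(E, D_i, D_future, I_i, I_future):
    """Levered firm value at the maturity of investment stage i, per eq. (4):
    equity plus debt proceeds (current and expected future) net of current
    and expected future investment costs, floored at zero (no exercise)."""
    return max(E + D_i + sum(D_future) - (I_i + sum(I_future)), 0.0)
```
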

Bankruptcy in periods between investment stages (and prior to the final investment) is triggered when the earnings<br />

net of cost and coupon payments plus the expected levered firm value (which includes expected equity value,<br />

expected cash received by debt issues and expected costs to be paid) are negative. Thus, the bankruptcy condition<br />

for any time t prior to the last investment stage is:<br />

Vt^L = max[(P − C − Σ_i Ri Ii^debt)(1 − τ)Δt + Ṽt^L, 0]    (5)<br />

and the values at that stage are calculated as above, depending on whether there is default or not. The values<br />

obtained at the first investment stage are discounted to t = 0. The value of the firm at time zero is the sum of<br />

the present value of equity and all expected debt issues, minus the expected present value of the investment costs. This is<br />

equivalent to the expected present value of the unlevered assets plus the expected present value of the tax benefits<br />

minus the expected present value of bankruptcy and the investment costs.<br />

In what follows we introduce the possibility that a firm may face equity or debt financing constraints or both. In<br />

order to incorporate equity financing constraints we investigate the case where equity holders face the constraint<br />

that net worth remains non-negative at all times, i.e., Et ≥ 0; that is, we do not allow for negative equity values<br />

between stages, implying that no equity infusion of cash is allowed. Alternative constraints allowing partial infusion of<br />

cash by equity holders prior to investment could easily be analyzed in this framework. Observe that this condition does not<br />

imply that there is no equity financing since equity holders can finance part or all of the investment cost at the time<br />

investments take place. However, if at any point in time after investment equity value drops to zero default is<br />

triggered without considering additional infusion of cash. This type of constraint may reflect difficulties in raising<br />

new external finance in the case where the firm is not performing well or the inability of current equity holders to<br />

infuse new cash because of personal financing constraints. Debt constraints can be easily incorporated in this<br />

framework, for example, by allowing coupon rates to be only a fraction of the revenue level at which the leverage<br />

decision is made. In the following numerical simulations both equity and debt financing constraints will be<br />

examined in a two stage model.<br />
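In a lattice implementation, the two constraint types could be imposed roughly as follows. This is an illustrative sketch; the cap fraction `phi` and the function names are ours, not the paper's.<br />

```python
def debt_constrained_coupons(P, nc=20, phi=0.5):
    """Debt constraint: admissible coupons are capped at a fraction phi of
    the revenue level P at which the leverage decision is made."""
    return [k * P / nc for k in range(nc + 1) if k * P / nc <= phi * P]

def equity_constraint_binds(E_between_stages):
    """Positive net worth constraint: with no equity infusion between stages,
    default is triggered as soon as equity value drops to zero."""
    return E_between_stages <= 0.0
```
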

3 Simulation results: Unconstrained and Constrained Debt and Equity<br />

Table 1 presents sensitivity results with respect to volatility for unconstrained debt where both debt issues have equal<br />

maturity (TD1 = TD2 = 5). The firm optimally selects high coupon levels which exceed current revenue levels. As we shall<br />



subsequently see, it implies that debt financing constraints become binding. Observe that equity value (net worth) is<br />

negative in this case. This is because of the high coupon and debt levels used. One can also find that firm values are<br />

higher than what can be obtained in a longer debt maturity case, for all levels of volatility and for different levels of<br />

moneyness (see also Agliardi and Koussis, 2011, for a single debt issue). Equity values are increasing in volatility for<br />

both low and high revenue levels. Debt values are strictly decreasing, and thus total leverage (LevT) is decreasing in<br />

volatility. With short term debt equity holders want to accelerate the receipt of tax benefits of debt. Since the initial debt<br />

expires before the exercise of the second-stage investment, there are no remaining net benefits; the firm is able to borrow<br />

more heavily initially and thus we register an option effect related to unlevered assets.<br />

Table 2 shows sensitivity results with respect to volatility when the firm faces equity financing constraints. Firm<br />

values decrease as a function of volatility because debt values decrease more than equity values<br />

increase. Observe that for both the out-of-the-money and the in-the-money case an increase in volatility reduces firm<br />

values by reducing the value of unlevered assets and the tax benefits of debt. Under positive net worth constraints firms<br />

issue less debt initially in order to avoid distress and behave conservatively. This is consistent with empirical studies<br />

(Graham, 2000 and DeAngelo et al., 2010) providing a possible explanation of growth firms borrowing conservatively.<br />

In Table 3 the results under debt financing constraints are examined. Under our parameters, debt constraints for both<br />

the out-of-the-money and the in-the-money case are binding, resulting in the maximum allowable coupon being used. In<br />

the out-of-the-money case an increase in volatility causes an increase in firm value which is caused by an increase in the<br />

option value of unlevered assets (value of unlevered assets net of expected costs). In this case a higher volatility helps<br />

alleviate the impact of financing constraints which reduce the tax benefits of debt. In order to evaluate the amount of<br />

leverage we calculate three measures. All measures use the time 0 values of equity and debt (despite the fact that some<br />

debt may be issued at a future date). The first measure (Lev1) includes only the first debt issue over the total value of<br />

equity plus all debt. The second measure (Lev2) includes the proportion of the second debt issue over the value of equity<br />

plus all debt issues. The third measure is total leverage over the total value of equity plus debt (LevT). We observe that<br />

both the initial and subsequent debt are decreased but Lev1 and Lev2 remain rather flat, with Lev1 being higher than Lev2<br />

(as in the unconstrained case). In the in-the-money case an increase in volatility has little impact on firm value when the<br />

firm faces debt financing constraints. This occurs because both the value of unlevered assets and the net benefits of debt<br />

remain flat. In comparison with the unconstrained case, the percentage drop in value is more substantial for the out-of-the-money<br />

case. For the in-the-money case the absolute value drop is significant, while the percentage drop is less important than in the<br />

out-of-the-money case. In the unconstrained case we observe that Lev1 is much higher than Lev2. While this remains true in the<br />

constrained debt case, the gap is now reduced.<br />
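The three measures can be computed from the time-0 values of equity and the debt issues; for instance (a sketch with names of our choosing; applied to the σ = 0.20 out-of-the-money row of Table 1 it reproduces the reported Lev1 = 0.66, Lev2 = 0.44, LevT = 1.10):<br />

```python
def leverage_measures(E0, D0):
    """Lev1, Lev2 and LevT from the time-0 equity value E0 and the list D0
    of time-0 debt issue values (first entry = first issue)."""
    total = E0 + sum(D0)
    lev1 = D0[0] / total
    lev2 = sum(D0[1:]) / total
    return lev1, lev2, lev1 + lev2
```
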

Finally, we can compare the impact of two alternative tax regimes on firm value and leverage. From the comparison<br />

between a full loss offset and a no loss offset scheme both in the out-of-the-money case and in-the-money case, we can<br />

see that the difference in firm value under the two regimes is substantial, with firm values even halved on some<br />

occasions. Coupon levels under the full loss offset case are always higher than in the corresponding cases of no loss<br />

offset. This is also shown in the large tax benefit differences between the two schemes, with the full loss offset exhibiting<br />

much higher values (while bankruptcy costs are of the same order of magnitude). This difference exists despite the<br />

large advantage the no loss offset case obtains from paying no taxes in the loss region.<br />

4 Time-to-build<br />

Let us now investigate the impact of time-to-build on firm values in the presence and in the absence of<br />

financing constraints. First, in the absence of financing constraints, we consider the financing choices of the firm facing<br />

time-to-build restrictions. It can be shown that the results critically depend on the tax scheme. Our second goal is to<br />

investigate the importance of financing constraints when the firm faces time-to-build restrictions and analyze the<br />

adjustments in the financing policy.<br />

Table 4 analyzes the case without financing constraints, with sensitivity to volatility and the<br />

opportunity cost-competitive erosion. With time-to-build the firm receives no cash flows until the project is<br />

completed, i.e., cash flows initiate after the second investment is implemented, and the useful life of the project is<br />

reduced by the time-to-build horizon 3 . In the base case, we observe that firm value, the value of unlevered assets and<br />

3 Had we assumed the operational period remains fixed, we would simply be capturing an option to delay the exercise of the investment option.<br />



the net benefits of debt are reduced with time-to-build. Interestingly, debt 1 and debt 2 increase with time-to-build<br />

and only decrease at very long time-to-build horizons. Lev1, Lev2 and LevT reflect this pattern of debt values<br />

since they are increasing as time-to-build increases and then remain flat for long time-to-build horizons. Equity<br />

values are decreasing with time-to-build reflecting the increase in the debt due (except for very long time-to-build<br />

horizons where debt due is decreased).<br />

Investigating the case of low volatility reveals some interesting insights. Since we are using short-term debt an<br />

increase in volatility reduces firm value. Based on standard real options models with time-to-build (e.g., Majd and Pindyck,<br />

1987 and Bar-Ilan and Strange, 1996 and 1998) one could expect higher volatility to be<br />

beneficial in order to increase the option value of completing the project successfully. The results, however, show<br />

that with low volatility the firm borrows more heavily to alleviate the impact of time-to-build. This is reflected in the<br />

large differentials between the low and high volatility case with respect to the tax benefits of debt. The ability of the<br />

firm to raise more debt under low volatility reduces the percentage decrease in firm values at higher time-to-build<br />

horizons (compared to the high volatility case). Thus in the case of time-to-build with optimal capital structure a low<br />

volatility may be preferred in contrast to the traditional result without debt financing where a high volatility would<br />

be preferable. A similar effect exists for the case of low δ. A low δ reduces the impact of time-to-build on firm<br />

values since the debt capacity is higher and the increase in the tax benefits is more significant.<br />

Table 5 investigates the impact of positive net worth equity constraints in the presence of time-to-build. The<br />

results show that positive net worth equity constraints can cause severe reductions in firm values in the presence of<br />

time-to-build. For the base case the decrease in firm value reaches 31% for 5 year time-to-build horizon and 43% for<br />

10 years of time-to-build (when comparing the corresponding cases of no constraints and equity constraints). For<br />

lower volatility the decrease in values due to positive net worth equity constraints is more severe. The reason is that<br />

at low σ the unconstrained firm would optimally choose to borrow more at t = 0 but that would increase the<br />

probability of negative equity worth (in those states that revenues would not suffice to cover coupon payments).<br />

Therefore, positive net worth equity constraints indirectly impose constraints on the optimal amount of leverage<br />

causing severe reductions in firm values. Interestingly, the firm in this case will optimally shift emphasis to<br />

obtaining higher leverage in subsequent debt issues. For lower opportunity cost δ the percentage reduction in firm<br />

values due to positive net worth equity constraints is less important (but still significant). Indeed, with lower δ the<br />

firm can retain positive equity values at the unconstrained levels more easily (despite borrowing more heavily). With<br />

lower δ and for short time-to-build the firm can retain positive equity values even using high initial debt levels.<br />

However, as the time-to-build becomes longer the subsequent leverage level becomes more important.<br />

Table 6 analyzes the case with debt financing constraints. Debt financing constraints cause a decrease in value<br />

of about 15-16% even for the case without time-to-build. Time-to-build makes the impact of debt financing<br />

constraints more significant, reaching firm value reductions of 33% for a time-to-build of 5 years in the base case, a 42%<br />

reduction for the low volatility case and 36% for the low opportunity cost case. The results also show that the initial<br />

leverage appears more significant than the subsequent leverage level. With debt constraints we observe that the firm value<br />

differences between low and high volatility are not significant. Low coupon levels due to the constraints reduce the<br />

risk of bankruptcy (as indicated by the low bankruptcy costs) even for the high volatility (base case); thus values of<br />

unlevered assets and tax benefits do not change with different volatility levels. Leverage choices remain very<br />

similar to the higher volatility case, with Lev1 being more important than Lev2. When δ is small bankruptcy is<br />

practically non-existent because of the constrained low levels of coupons used and the high debt capacity if the firm<br />

was not constrained. Debt appears to be equally balanced between the first and the second debt issue.<br />

With no loss offset the results are materially different regarding the use of initial leverage. One can find (these<br />

results are not shown for brevity) that the initial debt levels are drastically reduced and are close to zero for short to<br />

medium term horizons. Initial leverage increases slightly only for long time-to-build horizons. Some similarities<br />

with the full loss offset however still remain. With lower volatility or lower opportunity cost firm values are higher<br />

than in the base case for all time-to-build horizons. Furthermore, the percentage decrease in firm value due to time-to-build<br />

is less severe with lower volatility or lower opportunity cost. However, this mitigating effect of lower volatility or<br />

lower opportunity cost is weaker here, because the associated tax benefits are less important under no loss offset.<br />



5 References<br />

Agliardi, Elettra and Rossella Agliardi. 2009, Progressive taxation and corporate liquidation policy: analysis and policy implications, Journal of Policy Modeling 31, 144-154.<br />

Agliardi, Elettra and Nicos Koussis. 2011, Optimal capital structure and investment options in finite horizon, Finance Research Letters 8, 28-36.<br />

Bar-Ilan, Avner, and William C. Strange. 1996, Investment Lags, American Economic Review 86, 610-622.<br />

Bar-Ilan, Avner, and William C. Strange. 1998, A Model of Sequential Investment, Journal of Economic Dynamics and Control 22, 437-463.<br />

Broadie, M. and Kaya, O. 2007, A Binomial Lattice Method for Pricing Corporate Debt and Modeling Chapter 11 Proceedings, Journal of Financial and Quantitative Analysis 42, 279-312.<br />

Cox, J.C., Ross, S.A. and Rubinstein, M. 1979, Option Pricing: A Simplified Approach, Journal of Financial Economics 7, 229-263.<br />

DeAngelo, Harry, Linda DeAngelo and Toni M. Whited. 2010, Capital Structure Dynamics and Transitory Debt, Journal of Financial Economics, doi: 10.1016/j.jfineco.2010.09.005.<br />

Graham, J.R. 2000, How big are the tax benefits of debt? Journal of Finance 55, 1901-1941.<br />

Hackbarth, D. and D.C. Mauer. 2010, Optimal priority structure, capital structure and investment, mimeo.<br />

Hubbard, Glenn R., Anil K. Kashyap, and Toni M. Whited. 1995, Internal Finance and Firm Value, Journal of Money, Credit, and Banking 27, 683-701.<br />

Koussis, N., Martzoukos, S. and Trigeorgis, L. 2007, Real R&D Options with Time-to-Learn and Learning by Doing, Annals of Operations Research 151, 23-59.<br />

Leland, H. 1994, Corporate Debt Value, Bond Covenants, and Optimal Capital Structure, Journal of Finance 49, 1213-1252.<br />

Majd, S., and R. Pindyck. 1987, Time to Build, Option Value, and Investment Decisions, Journal of Financial Economics 18, 7-27.<br />

Mauer, D.C., and Sarkar, S. 2005, Real Options, Agency Conflicts and Optimal Capital Structure, Journal of Banking and Finance 29, 1405-1428.<br />

Rauh, Joshua D. 2006, Investment and Financing Constraints: Evidence from the Funding of Corporate Pension Plans, Journal of Finance 61, 33-71.<br />

Sundaresan, S. and Wang, N. 2007, Dynamic Investment, Capital Structure, and Debt Overhang, Working Paper, Columbia University.<br />

Titman, Sheridan, Stathis Tompaidis, and Sergey Tsyplakov. 2004, Market Imperfections, Investment Flexibility, and Default Spreads, Journal of Finance 59, 165-205.<br />

Tserlukevich, Y. 2008, Can real options explain financing behaviour? Journal of Financial Economics 89, 232-252.<br />

Uhrig-Homburg, Marliese. 2005, Cash-Flow Shortage as an Endogenous Bankruptcy Reason, Journal of Banking and Finance 29, 1509-1534.<br />

Whited, Toni M., and Guojun Wu. 2006, Financial Constraints Risk, Review of Financial Studies 19, 531-559.<br />

Table 1. Short term debt with no constraints: Sensitivity to volatility<br />

Panel A: Full tax offset<br />

Out-of-the-money case (P = 10)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 42.776 79.275 48.971 2.497 -16.666 82.289 60.126 50 32.973 16.5 0.65 0.48 1.13<br />

σ = 0.20 34.600 74.358 39.691 5.624 -10.602 71.392 47.635 50 23.826 16 0.66 0.44 1.10<br />

σ = 0.30 31.324 71.545 34.110 7.928 -7.660 69.885 35.501 50 16.402 18 0.72 0.36 1.08<br />

σ = 0.40 31.170 70.621 31.914 7.883 -4.414 66.008 33.057 50 13.481 18 0.70 0.35 1.05<br />

In-the-money case (P=30)<br />



Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 327.563 239.867 174.825 0.119 -85.046 311.372 188.246 50 37.009 60 0.75 0.45 1.21<br />

σ = 0.20 305.948 239.331 158.112 8.114 -70.534 290.180 169.683 50 33.382 58.5 0.75 0.44 1.18<br />

σ = 0.30 281.229 235.359 137.561 15.097 -50.305 270.946 137.182 50 26.594 60 0.76 0.38 1.14<br />

σ = 0.40 261.456 232.054 120.325 18.968 -29.342 239.926 122.827 50 21.954 57 0.72 0.37 1.09<br />

Panel B: No loss tax offset<br />

Out-of-the-money case (P = 10)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 20.880 90.616 9.121 1.234 6.441 51.235 46.893 50 33.690 10 0.49 0.45 0.94<br />

σ = 0.20 18.648 80.113 7.484 3.662 10.589 49.026 32.577 50 23.544 10.5 0.53 0.35 0.89<br />

σ = 0.30 18.081 74.959 11.589 3.266 15.942 41.811 28.664 50 18.337 9.5 0.48 0.33 0.82<br />

σ = 0.40 18.789 72.575 7.024 4.377 16.944 43.935 21.503 50 13.594 11 0.53 0.26 0.79<br />

In-the-money case (P=30)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 239.558 239.867 3.207 0.068 -33.153 210.205 149.519 50 37.013 40.5 0.64 0.46 1.10<br />

σ = 0.20 232.003 239.331 18.424 2.673 10.398 183.872 122.748 50 35.015 36 0.58 0.39 0.97<br />

σ = 0.30 220.276 236.941 24.103 6.657 44.030 147.851 109.133 50 30.737 30 0.49 0.36 0.85<br />

σ = 0.40 210.889 232.054 37.569 6.652 73.111 133.044 80.370 50 25.637 28.5 0.46 0.28 0.74<br />

Notes: The model with no equity or debt constraints is used. Base case parameters are: P =10 (out-of-money) or P = 30 (in-the-money), C = 0,<br />

risk-free rate r = 0.06, volatility σ = 0.2, competitive erosion δ = 0.06, investment cost I1 = 50, I2 = 50, b = 0.5, tax rate τ = 0.35 and T1 = 0, T2 = 5<br />

(time of second option relative to the first), TF = 20 and debt maturity TD1 = 5 and TD2 = 5 assuming zero principal. An optimal coupon is chosen<br />

based on nc =20 discretization points for each price level with maximum coupon level points being equal to the price levels (cmax = 40). In all<br />

tables Ndec = 1 (yearly decisions) with NΔt = 24 steps per year.<br />
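The coupon search described in these notes can be sketched as a simple grid construction. This is a minimal illustration only: the variable names, the evenly spaced grid, and the helper function are assumptions, not the authors' code.

```python
import numpy as np

# Illustrative reconstruction of the discretization in the table notes:
# nc = 20 candidate coupons per price level up to cmax, with Ndec yearly
# decision dates on a lattice of N_dt steps per year over useful life T_F.
n_c, c_max = 20, 40.0        # coupon points and maximum coupon (Table 1 case)
T_F, N_dt, N_dec = 20, 24, 1

# Evenly spaced coupon candidates (one simple choice of grid).
coupon_grid = np.linspace(0.0, c_max, n_c)

# Time lattice and the subset of steps at which decisions are taken.
time_grid = np.linspace(0.0, T_F, T_F * N_dt + 1)
decision_steps = time_grid[::N_dt // N_dec]   # one decision point per year

def best_coupon(value_fn):
    """Pick the candidate coupon maximizing an (assumed) levered-value function."""
    values = np.array([value_fn(c) for c in coupon_grid])
    return coupon_grid[np.argmax(values)]
```

For example, `best_coupon(lambda c: -(c - 16.0) ** 2)` returns the grid coupon closest to 16; in the model the value function would instead come from the lattice valuation at the relevant price level.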

Table 2. Short term debt with equity financing constraints: Sensitivity to volatility<br />

Panel A: Full tax offset<br />

Out-of-the-money case (P = 10)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 31.037 79.813 37.319 1.766 6.973 41.641 66.751 50 34.328 8 0.36 0.58 0.94<br />

σ = 0.20 26.521 75.918 31.386 4.739 8.151 43.397 51.017 50 26.044 9 0.42 0.50 0.92<br />

σ = 0.30 25.687 73.651 28.337 3.745 13.534 41.104 43.604 50 22.556 9 0.42 0.44 0.86<br />

σ = 0.40 24.783 70.621 25.231 5.394 12.975 41.002 36.481 50 15.675 10 0.45 0.40 0.86<br />

In-the-money case (P=30)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 262.944 239.867 111.901 3.165 25.721 117.089 205.792 50 35.659 22.5 0.34 0.59 0.93<br />

σ = 0.20 246.437 239.845 97.930 7.525 42.926 108.555 178.769 50 33.813 21 0.33 0.54 0.87<br />

σ = 0.30 238.636 238.742 90.632 10.115 50.195 119.431 149.634 50 30.624 24 0.37 0.47 0.84<br />

σ = 0.40 227.516 236.392 81.142 12.759 60.181 107.942 136.652 50 27.259 22.5 0.35 0.45 0.80<br />

Panel B: No loss tax offset<br />

Out-of-the-money case (P = 10)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 20.344 90.616 5.580 3.031 6.533 50.890 44.813 50 31.893 10 0.50 0.44 0.94<br />

σ = 0.20 18.418 80.113 7.254 3.662 12.527 46.858 32.577 50 23.544 10 0.51 0.35 0.86<br />

σ = 0.30 17.694 74.959 11.173 2.916 17.587 39.806 29.040 50 18.740 9 0.46 0.34 0.80<br />

σ = 0.40 18.594 72.537 6.868 4.377 18.579 42.104 21.505 50 13.594 10.5 0.51 0.26 0.77<br />

In-the-money case (P=30)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 232.521 278.015 35.441 2.410 31.301 140.250 146.878 50 35.909 27 0.44 0.46 0.90<br />

σ = 0.20 225.487 266.640 39.048 5.579 50.751 138.169 120.298 50 33.732 27 0.45 0.39 0.84<br />

σ = 0.30 214.369 260.385 37.767 7.416 72.379 113.045 110.525 50 31.581 22.5 0.38 0.37 0.76<br />

σ = 0.40 208.050 251.882 34.250 12.701 75.449 125.344 81.928 50 24.670 27 0.44 0.29 0.73<br />

Notes: The model with equity constraints (debt unconstrained) is used. Base case parameters are: P =10 (out-of-money) or P = 30 (in-the-money),<br />

C = 0, risk-free rate r = 0.06, volatility σ = 0.2, competitive erosion δ = 0.06, investment cost I1 = 50, I2 = 50, b = 0.5, tax rate τ = 0.35 and T1 = 0,<br />

T2 = 5 (time of second option relative to the first), TF = 20 and debt maturity TD1 = 5 and TD2 = 5 assuming zero principal. An optimal coupon is<br />

chosen based on nc =20 discretization points for each price level with maximum coupon level points being equal to the price levels (cmax = 40). In<br />

all tables Ndec = 1 (yearly decisions) with NΔt = 24 steps per year.<br />

Table 3. Short term debt with debt financing constraints: Sensitivity to volatility<br />

Panel A: Full tax offset<br />

Out-of-the-money case (P = 10)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 16.445 78.310 23.245 0.055 35.030 38.588 27.882 50 35.055 7.5 0.38 0.27 0.65<br />

σ = 0.20 16.565 72.407 21.097 0.540 32.149 36.517 24.299 50 26.400 7.5 0.39 0.26 0.65<br />

σ = 0.30 18.571 71.545 20.081 1.009 32.234 34.908 23.476 50 22.046 7.5 0.39 0.26 0.64<br />

σ = 0.40 20.681 70.621 19.077 1.247 32.700 33.175 22.577 50 17.770 7.5 0.38 0.26 0.63<br />

In-the-money case (P=30)<br />



Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 223.994 239.867 71.169 0.000 107.696 116.807 86.532 50 37.041 22.5 0.38 0.28 0.65<br />

σ = 0.20 223.946 239.779 71.110 0.024 107.669 116.736 86.461 50 36.919 22.5 0.38 0.28 0.65<br />

σ = 0.30 222.648 238.024 69.682 0.628 107.359 114.875 84.844 50 34.430 22.5 0.37 0.28 0.65<br />

σ = 0.40 220.579 236.392 66.866 2.025 108.161 111.162 81.910 50 30.654 22.5 0.37 0.27 0.64<br />

Panel B: No loss tax offset<br />

Out-of-the-money case (P = 10)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 15.154 78.310 18.882 0.139 33.715 38.563 27.816 50 34.940 7.5 0.39 0.28 0.66<br />

σ = 0.20 15.077 72.407 15.082 0.829 30.786 36.301 24.107 50 26.117 7.5 0.40 0.26 0.66<br />

σ = 0.30 15.966 69.030 13.169 1.381 29.578 33.984 21.801 50 19.397 7.5 0.40 0.26 0.65<br />

σ = 0.40 17.634 70.621 11.414 2.648 30.723 32.392 21.267 50 16.748 7.5 0.38 0.25 0.64<br />

In-the-money case (P=30)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

σ = 0.10 219.912 239.867 57.481 0.000 103.614 116.807 86.532 50 37.041 22.5 0.38 0.28 0.66<br />

σ = 0.20 218.014 239.701 48.337 0.146 101.824 116.625 86.297 50 36.733 22.5 0.38 0.28 0.67<br />

σ = 0.30 213.319 236.941 42.946 1.555 99.504 113.854 83.168 50 33.208 22.5 0.38 0.28 0.66<br />

σ = 0.40 207.898 234.578 36.599 4.438 99.439 108.880 77.966 50 28.386 22.5 0.38 0.27 0.65<br />

Notes: The model with debt constraints (without equity constraints) is used. Base case parameters are: P =10 (out-of-money) or P = 30 (in-the-money),<br />

C = 0, risk-free rate r = 0.06, volatility σ = 0.2, competitive erosion δ = 0.06, investment cost I1 = 50, I2 = 50, b = 0.5, tax rate τ = 0.35<br />

and T1 = 0, T2 = 5 (time of second option relative to the first), TF = 20 and debt maturity TD1 = 5 and TD2 = 5 assuming zero principal. An optimal<br />

coupon is chosen based on nc =20 discretization points for each price level with maximum coupon level points being equal to the price levels<br />

(cmax = 15) implying that coupons cannot exceed 75% of revenue (P) level at the time of the financing decision. In all tables Ndec = 1 (yearly<br />

decisions) with NΔt = 24 steps per year.<br />

Table 4. Time-to-build without financing constraints: Sensitivity to model parameters<br />

Panel A: Base case<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 72.624 79.956 33.902 1.234 14.527 49.665 48.432 20 20.000 9.75 0.44 0.43 0.87<br />

T 2=1 68.256 73.282 36.361 2.632 0.491 57.813 48.708 20 18.756 11.5 0.54 0.46 1.00<br />

T 2=2 66.139 65.781 39.224 2.129 -11.322 69.032 45.166 20 16.737 14 0.67 0.44 1.11<br />

T 2=3 63.855 60.223 40.815 1.798 -19.171 69.411 49.001 20 15.385 14 0.70 0.49 1.19<br />

T 2=4 61.404 55.652 41.946 1.781 -25.811 67.493 54.136 20 14.413 13.5 0.70 0.56 1.27<br />

T 2=5 59.215 50.846 43.985 3.122 -37.085 73.999 54.794 20 12.494 15.5 0.81 0.60 1.40<br />

T 2=6 51.063 46.199 39.547 2.917 -33.080 66.920 48.990 20 11.766 14 0.81 0.59 1.40<br />

T 2=7 43.361 41.622 35.513 2.706 -29.743 59.753 44.418 20 11.067 12.5 0.80 0.60 1.40<br />

T 2=8 34.935 37.323 30.421 2.413 -24.000 50.301 39.030 20 10.397 10.5 0.77 0.60 1.37<br />

T 2=9 28.838 33.188 27.453 2.176 -22.149 45.403 35.211 20 9.627 9.5 0.78 0.60 1.38<br />

T 2=10 21.921 29.367 23.467 1.911 -18.037 38.298 30.661 20 9.001 8 0.75 0.60 1.35<br />

Panel B: Lower volatility (σ = 0.1)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 79.406 79.956 40.118 0.668 4.115 57.980 57.312 20 20.000 11.25 0.49 0.48 0.97<br />

T 2=1 77.070 73.440 43.070 0.611 -7.770 79.899 43.770 20 18.830 15.5 0.69 0.38 1.07<br />

T 2=2 75.262 67.154 46.325 0.602 -20.083 84.985 47.976 20 17.616 16.5 0.75 0.43 1.18<br />

T 2=3 72.927 61.477 48.520 0.876 -30.383 89.384 50.119 20 16.194 17.5 0.82 0.46 1.28<br />

T 2=4 70.811 56.136 50.614 0.580 -39.022 87.111 58.080 20 15.358 17 0.82 0.55 1.37<br />

T 2=5 68.645 51.027 52.587 0.504 -47.642 87.066 63.686 20 14.464 17 0.84 0.62 1.46<br />

T 2=6 61.236 46.211 49.528 0.878 -47.526 79.386 63.002 20 13.625 15.5 0.84 0.66 1.50<br />

T 2=7 53.048 41.677 44.997 0.796 -43.482 71.698 57.660 20 12.829 14 0.83 0.67 1.51<br />

T 2=8 44.689 37.406 40.077 0.712 -38.448 64.018 51.200 20 12.082 12.5 0.83 0.67 1.50<br />

T 2=9 35.833 33.384 34.460 0.633 -31.879 53.793 45.296 20 11.378 10.5 0.80 0.67 1.47<br />

T 2=10 28.235 29.596 29.913 0.558 -27.072 46.118 39.905 20 10.715 9 0.78 0.68 1.46<br />

Panel C: Lower opportunity cost (δ = 0.02)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 120.073 112.578 50.614 3.120 12.340 75.426 72.306 20 20.000 15 0.47 0.45 0.92<br />

T 2=1 118.984 105.944 54.906 3.075 -2.173 93.384 66.564 20 18.791 18.5 0.59 0.42 1.01<br />

T 2=2 118.172 98.573 59.218 2.453 -16.310 100.722 70.926 20 17.165 20 0.65 0.46 1.10<br />

T 2=3 118.326 93.142 63.495 1.856 -28.489 100.222 83.049 20 16.455 19.5 0.65 0.54 1.18<br />

T 2=4 116.906 87.102 66.529 1.291 -39.035 103.007 88.368 20 15.434 20 0.68 0.58 1.26<br />

T 2=5 113.603 81.313 67.802 0.939 -46.483 103.140 91.518 20 14.573 20 0.70 0.62 1.31<br />

T 2=6 106.761 75.458 66.179 1.429 -50.303 101.990 88.521 20 13.447 20 0.73 0.63 1.36<br />

T 2=7 98.139 69.691 62.422 1.314 -48.864 99.401 80.262 20 12.661 19.5 0.76 0.61 1.37<br />

T 2=8 89.146 64.039 59.925 3.666 -54.582 98.335 76.544 20 11.151 20 0.82 0.64 1.45<br />

T 2=9 78.908 58.491 54.242 3.328 -48.901 90.935 67.369 20 10.496 18.5 0.83 0.62 1.45<br />

T 2=10 68.655 53.058 48.475 2.999 -42.966 81.130 60.370 20 9.879 16.5 0.82 0.61 1.44<br />



Notes: The model with no equity or debt constraints is used. Base case parameters are: P =10 , C = 0, risk-free rate r = 0.06, volatility σ = 0.2,<br />

competitive erosion δ = 0.06, investment cost I1 = 20, I2 = 20, b = 0.5, tax rate τ = 0.35 and T1 = 0. T2 ranges from 0 (no time-to-build) to a time-to-build<br />

of 10 years. When time-to-build exists the firm foregoes all cash flows until full completion of the project. Useful life after the<br />

investment completion is constant at TF = 20. Debt maturity TD1 = 5 and TD2 = 5 assuming zero principal. An optimal coupon is chosen based on<br />

nc =20 discretization points for each price level with maximum coupon level points being equal to the price levels (cmax = 40). In all tables Ndec =<br />

1 (yearly decisions) with NΔt = 12 steps per year.<br />

Table 5. Time-to-build with equity financing constraints: Sensitivity to model parameters<br />

Panel A: Base case<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 72.624 79.956 33.902 1.234 14.527 49.665 48.432 20 20.000 9.75 0.44 0.43 0.87<br />

T 2=1 68.256 73.282 36.361 2.632 0.491 57.813 48.708 20 18.756 11.5 0.54 0.46 1.00<br />

T 2=2 62.727 67.189 34.316 1.512 0.434 43.526 56.034 20 17.266 8.5 0.44 0.56 1.00<br />

T 2=3 55.376 61.549 31.359 1.276 0.758 28.433 62.441 20 16.256 5.5 0.31 0.68 0.99<br />

T 2=4 47.418 56.130 28.085 1.618 0.736 18.254 63.607 20 15.179 3.5 0.22 0.77 0.99<br />

T 2=5 40.682 51.015 25.232 1.274 1.608 13.147 60.217 20 14.290 2.5 0.18 0.80 0.98<br />

T 2=6 34.428 46.199 22.831 1.149 1.499 13.100 53.281 20 13.453 2.5 0.19 0.78 0.98<br />

T 2=7 28.116 41.622 20.259 1.063 1.872 10.496 48.450 20 12.701 2 0.17 0.80 0.97<br />

T 2=8 22.641 37.323 18.157 1.007 1.588 10.478 42.408 20 11.832 2 0.19 0.78 0.97<br />

T 2=9 17.416 33.188 16.126 0.656 1.928 7.806 38.923 20 11.241 1.5 0.16 0.80 0.96<br />

T 2=10 12.490 29.367 14.145 0.794 1.508 7.868 33.342 20 10.227 1.5 0.18 0.78 0.96<br />

Panel B: Lower volatility (σ = 0.1)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 79.406 79.956 40.118 0.668 4.115 57.980 57.312 20 20.000 11.25 0.49 0.48 0.97<br />

T 2=1 72.593 73.456 38.586 0.612 0.572 25.957 84.900 20 18.835 5 0.23 0.76 0.99<br />

T 2=2 64.549 67.334 35.513 0.559 0.263 12.979 89.047 20 17.738 2.5 0.13 0.87 1.00<br />

T 2=3 56.690 61.569 32.348 0.522 0.451 7.787 85.157 20 16.705 1.5 0.08 0.91 1.00<br />

T 2=4 49.101 56.140 29.159 0.466 1.055 5.191 78.587 20 15.733 1 0.06 0.93 0.99<br />

T 2=5 42.417 51.027 26.736 0.584 0.206 5.246 71.727 20 14.762 1 0.07 0.93 1.00<br />

T 2=6 35.909 46.211 24.116 0.464 0.496 5.191 64.175 20 13.953 1 0.07 0.92 0.99<br />

T 2=7 29.435 41.677 21.320 0.421 1.240 2.596 58.740 20 13.141 0.5 0.04 0.94 0.98<br />

T 2=8 23.688 37.406 19.033 0.375 1.308 2.596 52.160 20 12.376 0.5 0.05 0.93 0.98<br />

T 2=9 18.340 33.384 16.943 0.332 1.254 2.596 46.145 20 11.655 0.5 0.05 0.92 0.97<br />

T 2=10 13.364 29.596 15.035 0.291 1.091 2.596 40.653 20 10.976 0.5 0.06 0.92 0.98<br />

Panel C: Lower opportunity cost (δ = 0.02)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 120.073 112.578 50.614 3.120 12.340 75.426 72.306 20 20.000 15 0.47 0.45 0.92<br />

T 2=1 118.132 106.069 53.831 2.936 0.226 78.982 77.756 20 18.832 15.5 0.50 0.50 1.00<br />

T 2=2 110.592 99.704 50.270 1.911 2.522 54.229 91.312 20 17.471 10.5 0.37 0.62 0.98<br />

T 2=3 102.546 93.460 46.774 1.313 3.967 41.419 93.533 20 16.374 8 0.30 0.67 0.97<br />

T 2=4 94.176 87.339 43.251 0.998 5.020 31.161 93.411 20 15.416 6 0.24 0.72 0.96<br />

T 2=5 86.334 81.340 40.642 1.147 3.568 26.018 91.248 20 14.501 5 0.22 0.76 0.97<br />

T 2=6 78.257 75.458 37.508 0.890 4.020 18.222 89.835 20 13.819 3.5 0.16 0.80 0.96<br />

T 2=7 70.461 69.691 34.597 0.815 3.809 18.209 81.456 20 13.013 3.5 0.18 0.79 0.96<br />

T 2=8 60.845 64.039 31.113 2.199 1.859 10.589 80.505 20 12.108 2 0.11 0.87 0.98<br />

T 2=9 53.254 58.491 28.073 1.729 2.897 10.411 71.527 20 11.581 2 0.12 0.84 0.97<br />

T 2=10 46.145 53.058 25.533 1.548 2.544 10.405 64.094 20 10.898 2 0.14 0.83 0.97<br />

Notes: The model with equity constraints and no debt constraints is used. Base case parameters are: P =10 , C = 0, risk-free rate r = 0.06,<br />

volatility σ = 0.2, competitive erosion δ = 0.06, investment cost I1 = 20, I2 = 20, b = 0.5, tax rate τ = 0.35 and T1 = 0. T2 ranges from 0 (no time-to-build)<br />

to a time-to-build of 10 years. When time-to-build exists the firm foregoes all cash flows until full completion of the project. Useful life<br />

after the investment completion is constant at TF = 20. Debt maturity TD1 = 5 and TD2 = 5 assuming zero principal. An optimal coupon is chosen<br />

based on nc =20 discretization points for each price level with maximum coupon level points being equal to the price levels (cmax = 40). In all<br />

tables Ndec = 1 (yearly decisions) with NΔt = 12 steps per year.<br />

Table 6. Time-to-build with debt financing constraints: Sensitivity to model parameters<br />

Panel A: Base case<br />



Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 66.997 79.956 27.140 0.098 29.357 38.869 38.771 20 20.000 7.5 0.36 0.36 0.73<br />

T 2=1 60.943 73.456 26.387 0.065 24.322 38.895 36.561 20 18.835 7.5 0.39 0.37 0.76<br />

T 2=2 55.246 67.302 25.677 0.017 19.583 38.896 34.483 20 17.717 7.5 0.42 0.37 0.79<br />

T 2=3 49.812 61.500 24.963 0.012 15.116 38.864 32.470 20 16.639 7.5 0.45 0.38 0.83<br />

T 2=4 44.654 56.057 24.238 0.065 10.913 38.805 30.511 20 15.575 7.5 0.48 0.38 0.86<br />

T 2=5 39.809 50.948 23.535 0.148 6.945 38.757 28.633 20 14.526 7.5 0.52 0.39 0.91<br />

T 2=6 35.137 46.144 22.788 0.278 3.266 38.590 26.798 20 13.517 7.5 0.56 0.39 0.95<br />

T 2=7 30.505 41.566 21.837 0.579 -0.148 38.168 24.803 20 12.318 7.5 0.61 0.39 1.00<br />

T 2=8 26.180 37.250 20.955 0.808 -3.283 37.730 22.951 20 11.218 7.5 0.66 0.40 1.06<br />

T 2=9 21.076 33.049 19.182 1.617 -5.808 36.158 20.264 20 9.538 7.5 0.71 0.40 1.11<br />

T 2=10 16.693 29.221 17.767 1.568 -6.911 33.501 18.830 20 8.727 7 0.74 0.41 1.15<br />

Panel B: Lower volatility (σ = 0.1)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 67.210 79.956 27.255 0.000 29.339 38.936 38.936 20 20.000 7.5 0.36 0.36 0.73<br />

T 2=1 61.082 73.456 26.461 0.000 24.313 38.936 36.668 20 18.835 7.5 0.39 0.37 0.76<br />

T 2=2 55.310 67.334 25.714 0.000 19.580 38.936 34.533 20 17.738 7.5 0.42 0.37 0.79<br />

T 2=3 49.874 61.569 25.010 0.000 15.122 38.936 32.522 20 16.705 7.5 0.45 0.38 0.83<br />

T 2=4 44.754 56.140 24.347 0.000 10.924 38.936 30.628 20 15.733 7.5 0.48 0.38 0.86<br />

T 2=5 39.933 51.027 23.723 0.000 6.970 38.936 28.844 20 14.816 7.5 0.52 0.39 0.91<br />

T 2=6 35.393 46.211 23.135 0.000 3.247 38.935 27.164 20 13.953 7.5 0.56 0.39 0.95<br />

T 2=7 31.117 41.677 22.581 0.000 -0.260 38.935 25.582 20 13.141 7.5 0.61 0.40 1.00<br />

T 2=8 27.083 37.406 22.052 0.006 -3.561 38.928 24.085 20 12.370 7.5 0.65 0.41 1.06<br />

T 2=9 23.224 33.383 21.497 0.053 -6.646 38.856 22.617 20 11.603 7.5 0.71 0.41 1.12<br />

T 2=10 18.429 29.595 19.821 0.272 -7.760 35.929 20.975 20 10.715 7 0.73 0.43 1.16<br />

Panel C: Lower opportunity cost (δ = 0.02)<br />

Firm Unlevered TB BC Equity Debt 1 Debt 2 Inv1 Inv2 Coupon 1 Lev 1 Lev 2 Lev T<br />

T 2=0 99.832 112.578 27.254 0.000 61.962 38.935 38.935 20 20.000 7.5 0.28 0.28 0.56<br />

T 2=1 94.227 106.078 26.985 0.000 55.963 38.935 38.164 20 18.835 7.5 0.29 0.29 0.58<br />

T 2=2 88.689 99.707 26.720 0.000 50.083 38.935 37.409 20 17.738 7.5 0.31 0.30 0.60<br />

T 2=3 83.216 93.460 26.460 0.000 44.319 38.934 36.667 20 16.704 7.5 0.32 0.31 0.63<br />

T 2=4 77.813 87.337 26.205 0.001 38.669 38.933 35.940 20 15.729 7.5 0.34 0.32 0.66<br />

T 2=5 72.480 81.337 25.956 0.002 33.130 38.933 35.228 20 14.811 7.5 0.36 0.33 0.69<br />

T 2=6 67.215 75.456 25.707 0.007 27.700 38.931 34.525 20 13.940 7.5 0.38 0.34 0.73<br />

T 2=7 62.024 69.688 25.467 0.007 22.378 38.930 33.840 20 13.124 7.5 0.41 0.36 0.76<br />

T 2=8 56.905 64.022 25.230 0.006 17.155 38.929 33.162 20 12.341 7.5 0.44 0.37 0.81<br />

T 2=9 51.851 58.477 24.988 0.014 12.042 38.918 32.491 20 11.600 7.5 0.47 0.39 0.86<br />

T 2=10 46.876 53.015 24.750 0.018 7.014 38.913 31.820 20 10.871 7.5 0.50 0.41 0.91<br />

Notes: The model with debt constraints (without equity constraints) is used. Base case parameters are: P =10 , C = 0, risk-free rate r = 0.06,<br />

volatility σ = 0.2, competitive erosion δ = 0.06, investment cost I1 = 20, I2 = 20, b = 0.5, tax rate τ = 0.35 and T1 = 0. T2 ranges from 0 (no time-to-build)<br />

to a time-to-build of 10 years. When time-to-build exists the firm foregoes all cash flows until full completion of the project. Useful life<br />

after the investment completion is constant at TF = 20. Debt maturity TD1 = 5 and TD2 = 5 assuming zero principal. An optimal coupon is chosen<br />

based on nc =20 discretization points for each price level with maximum coupon level points being equal to 75% of the price levels (cmax = 15) .<br />

In all tables Ndec = 1 (yearly decisions) with NΔt = 12 steps per year.<br />



RE-EXAMINING CAPITAL STRUCTURE TESTS: AN EMPIRICAL ANALYSIS IN THE AIRLINE INDUSTRY<br />

Kruse Sebastian, Kalogeras Nikos, Semeijn Janjaap<br />

School of Business & Economics, Maastricht University, the Netherlands<br />

Email: n.kalogeras@maastrichtuniversity.nl<br />

Abstract. Research has not reached a consensus as to whether Trade-off or Pecking Order theories best explain actual firms’ financing<br />

behavior. Recently, different authors have examined the impact of competitive strategy and ownership structure on capital structure<br />

decisions, providing empirical support for important interactions. Yet, this type of research has not been included in mainstream<br />

finance theories of capital structure. Our central hypothesis is that existing empirical capital structure tests may be biased since they do<br />

not account for the attributes of competitive strategy and ownership type. We recognize that these two attributes should be included in<br />

the existing empirical tests, and hence, we may improve their explanatory power as well as shed more light on the debate between<br />

Trade-off and Pecking Order theories. Specifically, we modify three existing empirical models to include competitive strategy and<br />

ownership structure and test our central hypothesis in the airline industry. Our empirical results show strong support for the impact of<br />

competitive strategy and ownership structure on capital structure choices. Moreover, our results demonstrate that existing empirical<br />

tests for Trade-off and Pecking Order theory may indeed be biased. Financial managers and policy makers may gain some useful<br />

insights from these results. First, our results indicate that managers, CEOs, and CFOs should account for the strategic focus of firms<br />

over time when making capital structure decisions. Second, managers should weigh carefully the benefits of lower interest rates<br />

against the reduced ability to issue seasoned equity when thinking about state investors. Finally, our results highlight that managers<br />

should consider the high costs of information asymmetries emphasized by Pecking Order theory, as well as the tax benefit versus<br />

bankruptcy cost argument provided by Trade-off theory.<br />

Keywords: capital structure choice, competitive strategy, ownership type, airline industry.<br />

JEL classification: G30; G32.<br />

1 Introduction<br />

Fifty-two years after the seminal work of Modigliani and Miller (1958), there is no consensus among financial<br />

scholars concerning how firms determine their capital structure. Two main theories have emerged: Trade-off theory<br />

(Modigliani & Miller, 1958, 1963) and Pecking Order theory (Myers, 1984; Myers & Majluf, 1984). On the one<br />

hand, Trade-off theorists argue that firms weigh costs and benefits of debt and equity and make their leverage<br />

decision accordingly. Consequently, an optimal leverage ratio exists for each firm. On the other hand, Pecking Order<br />

proponents assert that high costs of information asymmetries drive firms’ preference for internal over external<br />

finance. As a result, firms only use external finance when all internal means are exhausted. Furthermore, firms<br />

always prefer debt to equity, when debt is available. Thus, Pecking Order theory posits that no optimal leverage ratio<br />

exists and firm leverage is the outcome of past financing needs.<br />

Empirical finance literature does not provide conclusive references favoring either Trade-off or Pecking Order<br />

theories in order to determine and predict capital structure choices (Frank & Goyal, 2008). In fact, both theories<br />

have received some empirical support. In empirical analysis, most of the variables were found to be cross-sectionally<br />

correlated to a firm’s leverage. This result has implications consistent with Trade-off theories (Rajan &<br />

Zingales, 1995). However, other studies indicated that profitability is negatively correlated to leverage, as Pecking<br />

Order theory predicts (e.g., Frank & Goyal, 2009). Moreover, some authors strongly support that the target<br />

adjustment processes are consistent with Trade-off theories (e.g., Flannery & Rangan, 2006), while others find the<br />

target-adjustment process to be too slow to matter at all (e.g., Fama & French, 2002). Overall, evidence so far is<br />

mixed and often similar tests have yielded different results depending on the sample employed.<br />

Yet, strategy scholars have begun recently to examine the interaction of strategic variables with capital<br />

structure. Specifically, theorists in strategy suggest that competitive strategy and ownership structure are important<br />

determinants of capital structure (e.g., Williamson, 1988; Barton & Gordon, 1988). First, firms that follow a<br />

differentiation strategy usually have lower degrees of collateralizable assets, and in turn, reduce their leverage<br />

capacity. Moreover, a firm’s differentiation strategy may lead to an excess borrowing capacity for strategic<br />

acquisitions or investments. Empirical evidence provides some support that differentiation strategies are indeed<br />

associated with lower levels of leverage (e.g., Balakrishnan & Fox, 1993). Second, state-owned firms usually enjoy<br />

implicit guarantees on their credit, which decreases their cost of debt. That is, state-owned firms tend to rely more<br />

on debt financing compared to firms financed entirely by private investors (Dewenter & Malatesta, 2001). Hence,<br />



theory and evidence from strategy scholars favor the idea that competitive strategy and ownership structure have an<br />

important effect on the level of a firm’s leverage.<br />

Building on strategic management advances (e.g., Williamson, 1988; Balakrishnan, & Fox, 1993; O’Brien,<br />

2003), we hypothesize that there is an important interaction between competitive strategy, ownership structure and<br />

the empirical debate contrasting Trade-off and Pecking Order theory. Specifically, we aim to answer the following<br />

research question: Does the omission of competitive strategy and ownership structure bias the results of existing<br />

empirical finance tests designed to distinguish between Trade-off and Pecking Order theories? To the best of our<br />

knowledge, this is the first empirical study that attempts to examine the relationship between competitive strategy,<br />

ownership structure and a potential bias in existing and commonly used capital structure tests.<br />

The intuition behind this rationale is two-fold. First, firms that follow a differentiation strategy may have lower<br />

levels of leverage than low-cost firms. Existing tests, such as the financing deficit test (e.g., Shyam-Sunder &<br />

Myers, 1999), do not account for the fact that there may be systematic differences in the level of leverage according<br />

to different firm strategies. However, these systematic differences in leverage may have a large impact on the test<br />

results and may lead to false conclusions. Second, state-owned firms’ behavior may be consistent with Pecking<br />

Order implications since their capital structure formation relies heavily on debt and they are reluctant to issue equity.<br />

However, the behavior of these firms is not caused by information asymmetries, but it results from the ownership<br />

structure of these firms. State-owned firms are able to obtain debt cheaper and may be restricted from issuing equity.<br />

That is, empirical Pecking Order tests might yield wrong conclusions when a selected sample consists of state-owned<br />

firms. Furthermore, empirical tests designed to detect mean reversion behavior regarding leverage levels may<br />

also yield biased results because state-owned firms cannot decrease their leverage by issuing equity easily<br />

(Dewenter & Malatesta, 2001).<br />

We perform three empirical tests that are designed to detect Pecking Order and Trade-off behavior. Specifically,<br />

we build on existing literature and then extend each test to accommodate our new control variables. First, we test<br />

three cross-sectional variables in order to examine the impact on leverage level of both competitive strategy and<br />

ownership structure. Second, Trade-off theory predicts that firms try to achieve a target level of leverage, but firms<br />

may move away from this target over time. Consequently, we measure the speed of the adjustment process to a<br />

leverage target (Shyam-Sunder & Myers, 1999). Third, Pecking Order theory predicts that financing deficits will be<br />

funded with additional debt issues, which is tested using a financing deficit test (Shyam-Sunder & Myers, 1999). In<br />

order to examine the extent to which airlines may follow Pecking Order theory, we modify the extended financing<br />

deficit test (Frank & Goyal, 2009) by including our control variables (i.e. competitive strategy and ownership<br />

structure). Our central proposition is that competitive strategy and ownership structure may bias the results of<br />

existing empirical capital structure tests. Our findings indicate that partial state-ownership is associated with more<br />

Trade-off and less Pecking Order behavior, while the relevant theory suggests opposite behavior. Moreover, a low-cost<br />

strategy is associated with less Pecking Order behavior. In the next sections, we discuss in more detail the<br />

formation of our hypothesis and the results of our empirical study.<br />
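The two panel regressions referred to above, the target-adjustment test and the financing deficit test of Shyam-Sunder & Myers (1999), can be sketched on synthetic data. This is a hedged illustration only: the data-generating process, coefficient values, and variable names are assumptions for demonstration, not the paper's sample or results.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(y, x):
    """Slope from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Synthetic firm-year observations (purely illustrative).
n = 500
deficit = rng.normal(size=n)                            # financing deficit DEF_it
debt_issue = 0.8 * deficit + 0.1 * rng.normal(size=n)   # net debt issued, dD_it

# Financing deficit test: dD_it = a + b_PO * DEF_it + e_it.
# A slope b_PO near 1 supports Pecking Order behavior.
b_po = ols_slope(debt_issue, deficit)

# Target-adjustment test: dD_it = a + b_TA * (D*_i - D_{i,t-1}) + e_it.
# A slope b_TA in (0, 1] indicates mean reversion toward a leverage target,
# consistent with Trade-off behavior.
target_gap = rng.normal(size=n)
delta_debt = 0.4 * target_gap + 0.1 * rng.normal(size=n)
b_ta = ols_slope(delta_debt, target_gap)
```

In the simulated data the estimated slopes recover the assumed coefficients (roughly 0.8 and 0.4), which is what the tests are designed to measure on real firm panels.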

2 Hypothesis formation<br />

To be comprehensive in our empirical analysis, we use three types of empirical tests. First, we examine the cross-sectional<br />

correlations of factors often indicated as being correlated to firms’ leverage (e.g., Rajan & Zingales, 1995).<br />

Second, we focus on the mean reverting behavior of firms, which is a major characteristic of a firm’s Trade-off<br />

behavior (Shyam-Sunder & Myers, 1999). Third, we use the financing deficit methodology (Frank & Goyal, 2003)<br />

to examine the extent to which firms follow Pecking Order behavior. We structure our hypothesis generation along<br />

these three empirical tests.<br />

2.1 Cross-sectional correlation<br />

To examine the cross-sectional correlations, we use the three variables most consistently associated with leverage in<br />

cross-sectional analyses: profitability, size and M/B. Using this cross-sectional analytical framework, we provide a<br />

detailed picture of the financing behavior of the firms in our sample.<br />

It has been theorized (e.g., Titman, 1984; Balakrishnan & Fox, 1993) that firms following a differentiation<br />

strategy will most likely have lower levels of leverage compared to firms following a low-cost strategy. Usually, a<br />

differentiation strategy entails a high level of firm-specific investments, which cannot easily be collateralized.<br />

Furthermore, differentiated firms often maintain excess borrowing capacity for pursuing strategic acquisitions. Low-cost<br />

firms often have a lower degree of firm-specific investment and often do not pursue any strategic acquisitions<br />

(O’Brien, 2003). Hence, differentiated firms have an incentive to keep their leverage lower than low-cost firms. This<br />



relationship has been confirmed almost unanimously in the empirical literature (e.g., Titman, 1984; O’Brien, 2003).<br />

We also expect to find a differentiation strategy to be associated with lower leverage compared to a low-cost<br />

strategy in our sample, hence:<br />

H1: Firms that follow a differentiation strategy are less leveraged than firms that follow a low-cost strategy.<br />

Dewenter and Malatesta (2001) argue that state-owned firms have a higher leverage level than privately-owned<br />

firms for at least two reasons. First, state-owned firms are often prohibited from issuing equity, which may dilute the<br />

ownership of the state. Second, state-ownership serves as an implicit state guarantee for the survival of the firm,<br />

inducing lenders to offer debt to the firm at lower rates compared to privately-owned firms. Several researchers<br />

(e.g., Titman and Wessels, 1988) provide empirical support that links state ownership to a higher degree of leverage.<br />

Following Dewenter & Malatesta (2001) closely, we also expect state ownership to be related to a higher degree of<br />

leverage compared to private ownership.<br />

H2: State-owned firms are more leveraged than privately-owned firms.<br />

Trade-off theorists argue that leverage should increase as profitability increases because a firm’s tax shield also<br />

increases in value when profits are higher. However, Pecking Order theory posits that firms with a high profitability<br />

will have lower leverage as they rely on their internal funds for financing. Empirical evidence strongly favors the<br />

Pecking Order proposition (Rajan & Zingales, 1995). Since there is no evidence supporting a positive relationship<br />

between profitability and leverage, we also expect profitability to be negatively related to leverage.<br />

H3: A firm’s profitability is negatively related to its leverage.<br />

Trade-off theories support a positive relationship between firm size and leverage. That is, large-sized firms have<br />

a lower risk of bankruptcy and may thereby reduce financial distress costs (e.g., Ang, et al. 1982; Warner, 1977).<br />

Pecking Order theory does not provide a clear interpretation of the role of size in its relation to leverage. Moreover,<br />

most empirical studies rather support the Trade-off theory point of view and, therefore, we also expect a positive<br />

relation between leverage and firm size, as follows:<br />

H4: A firm’s size is positively related to its leverage.<br />

In accordance with Trade-off theory, the market-to-book ratio is usually associated with high growth<br />

opportunities, which increase the costs of financial distress (Kraus & Litzenberger, 1973). Growth opportunities also<br />

increase potential agency conflicts. This happens because the investment decisions of firms are less transparent to<br />

outsiders (Jensen & Meckling, 1976). Finally, growth options are difficult to collateralize, reducing the maximum<br />

debt capacity of firms (e.g., DeAngelo & Masulis, 1980). Hence, the market-to-book ratio as a measure of growth<br />

options is theorized to be negatively related to leverage in Trade-off theories (Frank & Goyal, 2008). In contrast,<br />

Pecking Order theories claim that, under ceteris paribus conditions, higher growth options lead to more financing<br />

needs and therefore to larger debt accumulation over time (Myers, 1984). Empirical evidence, such as Barclay et al.<br />

(2006), usually finds a negative relation of the market-to-book ratio to leverage. Consistent with this empirical<br />

evidence, we also expect:<br />

H5: A firm’s M/B is negatively related to its leverage.<br />

2.2 Target adjustment<br />

All Trade-off theoretical streams, whether they are based on theoretical premises regarding tax benefits vs.<br />

bankruptcy costs (Kraus & Litzenberger, 1973), agency costs (Jensen & Meckling, 1976) or transaction costs<br />

(Fisher, et al. 1989), support that firms should exhibit some sort of target adjustment behavior in their capital<br />

structure decisions. In other words, if a test can prove that firms follow a target adjustment procedure in their capital<br />

structure decisions, then this is usually taken as evidence in favor of Trade-off theories (Frank & Goyal, 2003).<br />

Empirical studies often find moderate target adjustment in diverse samples (e.g., Shyam-Sunder &<br />

Myers, 1999; Jalilvand & Harris, 1984; Hovakimian, et al. 2001). Although we focus on only one industry, we also<br />

expect to see moderate target adjustment behavior:<br />



H6: Firms in our sample exhibit moderate target adjustment behavior.<br />

Competitive strategy has been linked theoretically to leverage levels within firms (e.g., Titman, 1984;<br />

Balakrishnan & Fox, 1993). Empirical studies have confirmed these theoretical linkages (e.g., Balakrishnan & Fox,<br />

1993; Jordan, et al. 1998; O’Brien, 2003). To the best of our knowledge, there are no theoretical predictions<br />

regarding the impact of competitive strategy on the adjustment behavior of firms. Thus, we expect to find no<br />

significant differences in target adjustment behavior in our sample according to competitive strategy.<br />

H7: There are no significant differences in target adjustment behavior between low-cost and differentiated firms.<br />

Dewenter and Malatesta (2001) have discussed how a firm’s state ownership is linked to a higher degree of<br />

leverage. One of the reasons for claiming that state-ownership leads to higher leverage may be the reduced ability of<br />

state-owned firms to issue equity in the secondary markets. That is, often state-owned firms are not allowed to issue<br />

additional equity because these equity issuances would dilute the state ownership, which is against the interests of<br />

the firm’s owner (the state). Here, we adopt the premises made by Dewenter & Malatesta (2001) to our decision<br />

context. If state ownership leads to reduced ability to issue equity, then surely state-owned firms cannot revert back<br />

to pre-defined leverage targets. This premise may be illustrated by using the following example: Suppose that a<br />

shock to the equity market decreases the overall valuations of all firms in an economy (e.g., a financial crisis<br />

occurs). In this situation, firms are moved away from their leverage targets because their equity values decreased.<br />

Hence, firms may have an incentive to issue equity to move back to their leverage targets. Private firms in this<br />

scenario, which have no general restrictions on their equity issues, simply issue seasoned equity to restore their target<br />

capital structure. State-owned firms, on the other hand, may find themselves in the situation that they would like to<br />

issue equity, but they are restricted from doing so due to the fact that their state owner is not in favor of diluting its<br />

ownership. The state-owned firms in the economy may then find it hard to revert back to their leverage targets. 1 In<br />

an empirical sample including a considerable number of state-owned firms, slow target adjustment behavior would<br />

be observed and the researcher may conclude that Trade-off theories do not explain this finding. Yet, the slow target<br />

adjustment may simply be attributed to the fact that state-owned companies are unable to revert back to their<br />

targets. In line with this reasoning, we predict that state-owned firms revert back to their targets more slowly than<br />

privately-owned firms. Therefore, we hypothesize that:<br />

H8: State-owned firms revert back to their leverage target more slowly than privately-owned firms.<br />

2.3 Financing deficit<br />

The Pecking Order theory predicts that firms follow a clear hierarchy in terms of financing (Myers, 1984). First,<br />

firms use internal funds. Second, firms rely on financing instruments with a low degree of adverse selection costs,<br />

e.g., debt. Third, firms only use instruments with high adverse selection costs, such as equity, as a last resort when<br />

their full debt capacity is exhausted (Myers & Majluf, 1984). The financing deficit methodology allows the testing<br />

for one of the central propositions of Pecking Order theory, namely the strong reliance on debt as the primary<br />

financing tool to close financing gaps (Shyam-Sunder & Myers, 1999). In essence, this methodology answers a<br />

simple question: Does a firm use debt as its primary tool for financing, given that its internal funds are exhausted?<br />
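This financing hierarchy can be expressed as a simple allocation rule. The sketch below is a stylized illustration of the theory itself, not of the empirical test; the function name and its arguments are our own:

```python
def pecking_order_financing(deficit, internal_funds, spare_debt_capacity):
    """Stylized Pecking Order (Myers & Majluf, 1984): draw on internal funds
    first, then on debt up to spare capacity, and on equity only as a last
    resort."""
    internal = min(deficit, internal_funds)
    debt = min(deficit - internal, spare_debt_capacity)
    equity = deficit - internal - debt
    return {"internal": internal, "debt": debt, "equity": equity}
```

For a deficit of 100 with internal funds of 30 and spare debt capacity of 50, the rule allocates 30 internally, 50 to debt, and only the remaining 20 to equity.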

Frank and Goyal (2003) use a large sample of firms and show that firms finance their deficits only to a very low<br />

extent with debt, contradicting Pecking Order theory. The authors find a Pecking Order coefficient of roughly 0.28.<br />

Such a coefficient is very small in magnitude and would contradict the predictions of Pecking Order theory.<br />

H9: We expect a Pecking Order coefficient of roughly 0.28 over the full sample.<br />

Suppose that two firms exist: A and B. Firm A follows a differentiation strategy and firm B follows a low-cost<br />

strategy. Firm A has a high degree of investments in firm-specific assets, which are not easily collateralized (e.g.,<br />

Titman, 1984; Balakrishnan & Fox, 1993). Moreover, firm A maintains an excess borrowing capacity for future<br />

acquisitions (O’Brien, 2003). Firm B has a much lower degree of firm-specific assets and does not value future<br />

acquisitions that much. Hence, firm A will have a leverage level much lower than firm B. This may happen because<br />

firm A follows a differentiation strategy. Let’s suppose that firm A has a leverage level, measured as debt-over-total-<br />

1 State-owned firms also have the option to issue equity to their state, so that state ownership is actually increased. However, this is not always an<br />

option, as overall shocks to the economy, e.g. financial crises, may also deter states from investing in companies.<br />



assets of 40% and firm B has a leverage level, measured exactly in the same way, of 60%. Both firms have a<br />

financing deficit of $1 million in year t. To keep their leverage targets at this point in time, firm A issues $400,000<br />

in debt and $600,000 in common stock. Firm B also intends to keep its leverage level constant and issues $600,000<br />

in debt and $400,000 in common stock. In year t+1, both firms maintain the same leverage levels as in period t (e.g.,<br />

40% and 60% respectively). If the financing deficit methodology is applied to a sample including only firms<br />

similar to firm A, one may arrive at the conclusion that these firms finance themselves mostly with equity,<br />

hence the Pecking Order predictions will be rejected. Yet, if the sample includes firms similar to firm B, the findings<br />

may show that the B-like firms mostly use debt to finance their deficits, hence, the Pecking Order theory will be<br />

confirmed. That is, firms A and B simply have different target levels of leverage and finance their deficits<br />

accordingly. Here, we claim that the financing deficit test does not account for the fact that different types of firms<br />

have different levels of leverage. Instead, this test is based on the assumption that all firms have the same target<br />

level. Therefore, we argue that the existing financing deficit test results may be biased because competitive strategy<br />

is not included as a control variable. Following this reasoning, we expect that low-cost firms exhibit more Pecking<br />

Order behavior than firms using a differentiation strategy, hence:<br />

H10: Firms that follow a low-cost strategy exhibit more Pecking Order behavior compared to firms that follow a<br />

differentiation strategy.<br />
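The arithmetic behind this illustration can be made explicit. In the sketch below (our own illustrative helper), a firm that wants to hold its leverage ratio constant splits each dollar of deficit between debt and equity in proportion to its target:

```python
def issues_to_keep_leverage(target_leverage, deficit):
    """Split a financing deficit into debt and equity issues so that the
    leverage ratio (debt / total assets) stays at its target."""
    debt_issue = target_leverage * deficit
    equity_issue = deficit - debt_issue
    return debt_issue, equity_issue

# Firm A (differentiation, 40% target) vs. firm B (low-cost, 60% target),
# each facing a $1 million financing deficit:
firm_a = issues_to_keep_leverage(0.40, 1_000_000)  # about (400000, 600000)
firm_b = issues_to_keep_leverage(0.60, 1_000_000)  # about (600000, 400000)
```

A financing-deficit regression run on A-like firms alone would thus find a low debt share, and on B-like firms alone a high one, even though both follow the same constant-leverage rule.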

Analytical works (e.g., Dewenter & Malatesta, 2001) as well as empirical evidence (e.g., Titman & Wessels,<br />

1988) support that state-owned firms have leverage levels higher than privately-owned firms. Yet, following our<br />

discussion in the previous paragraph (H10), the financing deficit test does not account for these systematic<br />

differences in leverage. Thus, in a sample with a high proportion of state-owned firms, researchers may infer that<br />

firms follow the Pecking Order (high reliance on debt), whereas researchers may come to the completely opposite<br />

conclusion when analyzing a sample including a low proportion of state-owned firms. Hence, one may argue that<br />

empirical tests may be biased unless state ownership is included as a control variable. In line with this reasoning, we<br />

expect that state-owned firms exhibit more Pecking Order behavior than entirely privately-owned firms, hence:<br />

H11: (Partially) state-owned airlines exhibit more Pecking Order behavior than privately-owned airlines.<br />

3. Sample<br />

To address our research objectives, we use the decision context of the airline industry. We can easily identify<br />

competitive strategy by splitting airlines into two types. First, some airlines offer a full network across the globe<br />

with a high level of service. These airlines follow a differentiation strategy. Second, many airlines offer point-to-point<br />

service at very low costs and relatively low levels of service. These airlines follow a low-cost strategy. 2<br />

Moreover, many firms in the airline industry have a history of state ownership, while most of these firms have<br />

recently been (partially) privatized. Since many governments still hold large stakes in their national airlines, the<br />

airline industry seems to be a prominent decision context for testing the impact of state ownership on capital<br />

structure decisions.<br />

3.1 Data<br />

We used the database “Company.Info” that provides financial information and industry-specific overviews.<br />

Specifically, we selected all firms classified under Transport – Air Transport – Airlines (63 firms). These 63 firms<br />

represent all air transportation firms covered by the database. We excluded all firms involved only in helicopter or cargo transportation.<br />

Furthermore, we excluded those firms that are not listed in Compustat, assuming that these firms do not have any<br />

financial data publicly available. Our final sample includes 44 airline firms. We obtained quarterly and yearly<br />

accounting data from Compustat. Because data for non-American firms has been available only since 1998, we<br />

restricted our data collection from 1998 to 2009. To complete missing data we also used DataStream and Thomson<br />

One Banker. In the case of missing data records, we reported the data as missing or, if only one observation is<br />

missing, we interpolated between the previous and following observations.<br />
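The gap-filling rule can be sketched as follows; `fill_single_gaps` is a hypothetical helper name, with missing records represented as `None`:

```python
def fill_single_gaps(series):
    """Linearly interpolate isolated missing values (None) that sit between
    two observed neighbors; longer gaps are left as missing, mirroring the
    rule described above."""
    filled = list(series)
    for i in range(1, len(filled) - 1):
        if filled[i] is None and filled[i - 1] is not None and filled[i + 1] is not None:
            filled[i] = (filled[i - 1] + filled[i + 1]) / 2
    return filled
```

For example, `fill_single_gaps([1.0, None, 3.0, None, None, 6.0])` fills only the first gap and leaves the two-period gap as missing.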

To control for competitive strategy, we closely followed the classification scheme of Kalogeras et al. (2009). We<br />

assigned a dummy variable to each airline indicating whether it follows a differentiation strategy (full-service<br />

2 Strictly speaking, there are regional airlines, which may follow a different strategy than differentiation or low-cost. However, these airlines are<br />

often relatively small-sized in terms of revenue and market capitalization. Thus, we exclude these companies from our analysis.<br />



airlines offering a wide network and high customer service) or a low-cost strategy (low-cost airlines offering point-to-point<br />

service with lower levels of customer service and a focus on offering low prices). With respect to ownership type, we<br />

followed the classification of Jenkinson (1998): we define ownership categories asymmetrically to account for the<br />

fact that a relatively small governmental stake in an otherwise privately-owned airline may carry significant veto<br />

power, while the same is not necessarily true for a state-owned airline in which a private investor holds an<br />

equivalent stake. Thus, we consider airlines to be privately-owned only in cases where private parties hold 99% or<br />

more of the airline’s outstanding voting stock. In contrast, we categorize an airline as a purely public sector-owned<br />

airline if a state, or several states, own 95% or more of its equity. All other airlines are categorized as “mixed<br />

ownership” (Backx, et al. 2002). To obtain our data using this classification procedure, we used annual reports of<br />

the selected airlines. Notably, our sample contains only airlines with private and mixed ownership (see Table 2).<br />

This may reflect the fact that no airlines with pure state ownership remain, due to the<br />

continuous liberalization of the airline markets during the last 30 years.<br />
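This asymmetric classification rule can be written compactly. A sketch (the function name and stake arguments are ours):

```python
def classify_ownership(private_stake, state_stake):
    """Asymmetric ownership classification following Jenkinson (1998):
    'private' if private parties hold >= 99% of voting stock,
    'state'   if a state (or several states) holds >= 95% of equity,
    'mixed'   otherwise (Backx, et al. 2002)."""
    if private_stake >= 0.99:
        return "private"
    if state_stake >= 0.95:
        return "state"
    return "mixed"
```

The asymmetry is deliberate: a 30% state stake already makes an airline "mixed", while a 30% private stake in a state carrier does not make it "private".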

The airlines in our sample have a mean ratio of total liabilities to assets of 70.8%, which is four percentage<br />

points higher than the mean of this ratio over all industries in the US, but one percentage point lower than the mean<br />

of this ratio in Germany (Rajan & Zingales, 1995). When looking at long-term debt as a measure of leverage, we<br />

recognize that airlines tend to be highly leveraged: 29.9% long-term debt relative to assets is higher than the mean of<br />

this ratio in the US and much higher than the mean of this ratio in Germany. Thus, the selected airlines seem to be<br />

relatively high-leveraged. The selected airlines tend to have very low market valuations. This is most probably due<br />

to low growth opportunities. Airlines in our sample have a very low (2.70) market-to-book ratio (M/B). The<br />

standard deviation for M/B is relatively high, indicating large dispersion in valuations. Furthermore, we observe an<br />

average profitability of only 2.1%, which may explain the low market valuations. Finally, the descriptive<br />

information presented in Table 3 indicates that the average sales of an airline are roughly $4 billion, and its median<br />

sales are around $1 billion.<br />

4. Empirical Modeling Approach<br />

4.1 Cross-sectional correlation<br />

For the cross-sectional analysis we defined leverage as long-term debt divided by total assets. We expected long-term<br />

debt to be a more precise measure of corporate leverage than, say, total liabilities, since total liabilities include<br />

many items, such as accounts payable, which are used for transaction purposes and do not necessarily reflect<br />

corporate leverage. We used the book value of long-term debt as market values are not available for all firms (Frank<br />

& Goyal, 2008). Bowman (1980) finds that no large differences exist between using market and book values, since<br />

correlation between the two measures is very high. Following Rajan and Zingales (1995) closely, we specified our<br />

empirical model for testing the cross-sectional correlations as follows:<br />

Leveragei = α + β1 * M/Bi + β2 * Profitabilityi + β3 * Sizei + β4 * Strategyi + β5 * Ownershipi + εi<br />

The market-to-book ratio is defined as market value of equity over book value of equity, each measured at the end<br />

of every quarter. We used profitability in our cross-sectional model to examine airlines’ ability to service interest<br />

payments and hence to capture financing-related risks. Therefore, we used quarterly operating cash flows<br />

normalized by the book value of assets as a proxy. Although this measures cash flows rather than profits, we<br />

followed the rationale of Rajan and Zingales (1995) and considered it better suited to capture the debt-servicing<br />

ability of the firms in our sample. We also used the natural logarithm of quarterly sales in US dollars for capturing<br />

the effect of Size. Next, we expanded the specification of Rajan and Zingales’s (1995) modeling framework by<br />

accounting for the influence of competitive strategy and ownership structure on airlines’ leverage.<br />
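The expanded specification can be estimated with ordinary least squares. A minimal sketch using numpy (the function name, argument layout, and 0/1 dummy coding are our assumptions):

```python
import numpy as np

def cross_sectional_ols(leverage, mb, profitability, size, strategy, ownership):
    """OLS for Leverage = a + b1*M/B + b2*Profitability + b3*Size
                            + b4*Strategy + b5*Ownership + e,
    where strategy and ownership are 0/1 dummies.
    Observations are stacked as flat firm-quarter arrays.
    Returns the coefficient vector [a, b1, b2, b3, b4, b5]."""
    X = np.column_stack([np.ones(len(leverage)), mb, profitability,
                         size, strategy, ownership])
    coef, *_ = np.linalg.lstsq(X, np.asarray(leverage, dtype=float), rcond=None)
    return coef
```

On noiseless synthetic data generated from known coefficients, this recovers them exactly, which is a quick way to sanity-check the design matrix.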

4.2 Trade-off theory: Target Adjustment<br />

We applied the target adjustment model introduced by Shyam-Sunder and Myers (1999). In this model, we checked<br />

whether the net changes in debt exhibit a pattern that brings firms closer to a predefined leverage target. In formulae:<br />

ΔDit = α + βta (D*it - Dit-1) + εit<br />

where: ΔDit is the change in debt from period t-1 to period t for firm i, D*it is the target debt level for firm i and Dit-1<br />

is the leverage in period t-1 for firm i. In other words, we are testing whether new debt issues are used to move the<br />

leverage level back to a predefined target. If target adjustment takes place, βta should be larger than 0, but smaller<br />


than 1, since we expect significant adjustment costs that prevent perfect adjustment. Consistent with prior research<br />

findings (e.g., Jalilvand & Harris, 1984), we selected the average leverage of firm i over our entire sample period as<br />

the leverage target for firm i. Usage of other targets (e.g., rolling averages) provided similar results. We also chose<br />

the proportion of long-term debt to book value of assets as our proxy for corporate leverage (see: Shyam-Sunder &<br />

Myers, 1999). We treated all observations as equally important (Frank & Goyal, 2003).<br />
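With a single regressor, βta can be estimated as a plain covariance-over-variance slope. A minimal per-firm sketch (the function name is ours; the paper pools all firm-quarter observations):

```python
def target_adjustment_beta(debt, target):
    """Estimate beta_ta in  dD_t = a + beta_ta * (D* - D_{t-1}) + e  by simple
    OLS for one firm. `debt` is the firm's leverage series and `target` its
    leverage target (the firm's full-sample average in our setup)."""
    dd = [debt[t] - debt[t - 1] for t in range(1, len(debt))]   # dependent variable
    gap = [target - debt[t - 1] for t in range(1, len(debt))]   # distance from target
    n = len(dd)
    mg, md = sum(gap) / n, sum(dd) / n
    cov = sum((g - mg) * (d - md) for g, d in zip(gap, dd))
    var = sum((g - mg) ** 2 for g in gap)
    return cov / var
```

On a series simulated with exact partial adjustment at rate 0.4, the estimator returns 0.4, which is a useful correctness check before applying it to data.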

4.3 Pecking Order: Financing Deficit<br />

Building on Shyam-Sunder and Myers (1999) and Frank and Goyal (2003), we specified a model to examine<br />

whether the financing needs of a company are met by new debt issues, as predicted by the Pecking Order. We<br />

specified the financing needs, or the financing deficit, of an airline firm as follows:<br />

DEFt = DIVt + CapExt + ΔWorkingCapitalt - OperatingCashFlowst<br />

In line with this model’s specification, the financing deficit of an airline grows larger when more dividends are paid,<br />

more cash flows are used for capital expenditures, and working capital is increased. The financing deficit decreases<br />

when operating cash flows are higher. Our dataset contained the items dividends, capital expenditures, and operating<br />

cash flows. To estimate the change in working capital we used the common definition of working capital as current<br />

assets minus current liabilities. 3 We examined whether this financing deficit is filled by new debt issues by applying<br />

the model of Frank and Goyal (2003). Formally:<br />

ΔDit = α + βPO * DEFit + εit<br />

where: ΔDit represents the change in debt from one period to the next. The ΔDit should be the amount of new debt<br />

issued when DEFit is positive and the amount of old debt retired when DEFit is negative (a financing surplus). If βPO<br />

equals exactly one, then it implies that the financing deficit always corresponds to entirely new debt issues or<br />

retirements. That is, when there is a financing gap of $100 million in year t, then a company would issue exactly<br />

$100 million to finance that gap. Naturally, it must be smaller than one because firms also use equity to close<br />

financing gaps. However, if βPO is reasonably close to one, then this would favor the Pecking Order explanation of<br />

capital structure behavior. Since new debt issues are unavailable for non-US firms, we estimate the amount of debt<br />

issued or retired in any year as the change in the level of long-term debt from one year to the next for<br />

the airlines included in our sample. To be consistent with the target adjustment model, we also followed Frank and<br />

Goyal (2003) and used simple regressions. Finally, we used a truncation procedure at the 4.5% level for the<br />

exclusion of outliers.<br />
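The deficit construction and the outlier rule can be sketched as follows (the helper names are ours, and we read the 4.5% truncation as dropping the most extreme observations in each tail, which is one plausible reading):

```python
def financing_deficit(div, capex, delta_wc, op_cf):
    """DEF_t = DIV_t + CapEx_t + dWorkingCapital_t - OperatingCashFlows_t."""
    return div + capex + delta_wc - op_cf

def truncate_outliers(observations, frac=0.045):
    """Drop the most extreme `frac` of observations in each tail,
    sorting by the financing deficit (first element of each pair)."""
    ordered = sorted(observations, key=lambda obs: obs[0])
    k = int(len(ordered) * frac)
    return ordered[k:len(ordered) - k] if k else ordered
```

For example, `financing_deficit(10, 50, 5, 40)` yields a deficit of 25, and truncating 100 (DEF, ΔD) pairs removes 4 from each tail, leaving 92.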

5. Results<br />

5.1 Cross-sectional analysis<br />

First, we checked whether differentiated airlines are less leveraged than low-cost airlines and whether state-owned<br />

airlines are more leveraged than private-owned airlines. The results in Table 1 reveal that differentiated airlines are<br />

13 percentage points more leveraged than low-cost airlines in terms of total liabilities over assets. The findings with<br />

respect to the long-term debt over assets, however, show that differentiated airlines are roughly four percentage<br />

points less leveraged than low-cost airlines. This finding supports H1. We consider that long-term debt<br />

over assets is the more useful measure, since it excludes items used for ongoing business such as supplier credit and<br />

it is focused on the pure debt portion in balance sheets. However, it is not quite clear what causes the ratio of total<br />

liabilities over assets to be substantially higher among differentiated airlines compared to low-cost airlines. That is,<br />

we partially confirm H1: differentiated airlines are less leveraged than low-cost airlines in terms of long-term debt<br />

over assets.<br />

3 For non-US companies this is the most precise measure available to us. To be consistent we also apply this measure to US firms.<br />



Shown values are averages, for each group, of the average ratios of each airline over the entire sample period.<br />

                             Competitive Strategy             Ownership<br />
                             Differentiation   Low-cost       Private   Mixed<br />
Total liabilities / Assets   73.55%            60.80%         72.84%    69.11%<br />
Long-term debt / Assets      29.64%            33.31%         32.24%    27.46%<br />

Table 1 – Descriptive Statistics of Airlines’ Competitive Strategy and Ownership Structure<br />

The results regarding ownership structure do not confirm that state-owned firms should be more leveraged than<br />

privately-owned airlines (H2). Privately-owned airlines are more leveraged than airlines with mixed ownership<br />

structure, regardless of which measure is used. This is a striking result, since state-owned airlines can obtain debt<br />

more cheaply. We may only assume that there is a correlation between state ownership and a<br />

differentiation strategy. This correlation might have caused the pattern in long-term debt. Indeed, when we examined<br />

our intuition empirically, ownership and competitive strategy were significantly correlated. 4<br />

Next, we performed cross-sectional regressions for determining the leverage of airlines with the M/B,<br />

profitability and size of the selected airlines. The results of these regressions are presented in Table 2.<br />

Variable Meaning Significance Coefficient Stand. Error t-statistic p-value<br />

α Constant *** 0.50 0.04 14.30 0.00<br />

β1 M/B ** 0.01 0.00 2.20 0.03<br />

β2 Profitability *** -1.57 0.27 -5.91 0.00<br />

β3 Size *** -0.03 0.01 -5.58 0.00<br />

Legend: * = 10% significance, ** = 5% significance, *** = 1% significance<br />

Leveragei = α + β1 * M/B + β2 * Profitability + β3 * Size + εi<br />

Dependent variable: Long-term debt / Assets<br />

R² = 0.122; Adjusted R² = 0.118<br />

Table 2 – Results of Cross-sectional Analysis<br />

In contrast to other studies (e.g., Rajan & Zingales, 1995; Barclay, Morellec & Smith, 2006), our results suggest that<br />

the market-to-book ratio is positively related to leverage. This would imply that firms with more growth<br />

opportunities have higher leverage than firms with fewer growth opportunities. This explanation is consistent with<br />

Pecking Order Theory: higher growth opportunities increase the need for external finance. Although the coefficient<br />

is significant, the impact on leverage is very small in magnitude: an increase in the market-to-book ratio from 1 to 2<br />

would only lead to a 1-percentage point increase in leverage. Based on this finding, we argue instead that the<br />

economic impact of market-to-book on leverage of airlines is virtually zero.<br />

The direction of profitability is consistent with our hypothesis, meaning that airlines with higher profitability<br />

tend to have lower leverage. This result clearly favors Pecking Order descriptions of capital structure (H3):<br />

profitability is negatively related to leverage. It should also be mentioned that the magnitude of the profitability<br />

coefficient is by far the highest compared to the other coefficients. A 1% increase in profitability lowers leverage by<br />

1.57 percentage points. Hence, the leverage of airlines is related significantly and very strongly to their profitability.<br />

Finally, we find a negative sign for the size coefficient. Since size is measured as a logarithmic variable, a 1%<br />

increase in the sales of an airline results in a leverage level that is lower by 0.0003 percentage points. An increase in sales of<br />

50% would lead to a decrease in leverage of 0.0015 percentage points. This impact on leverage seems rather low to<br />

infer any economically meaningful impact on an airline’s leverage in reality. Hence, it seems that the leverage of<br />

airlines is fairly independent of size.<br />
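The magnitude of this log-size effect can be checked directly. A back-of-envelope helper (our own), which uses the size coefficient of -0.03 from Table 2:

```python
import math

def implied_leverage_change(beta_size, sales_growth):
    """Change in leverage implied by the log-size term:
    d(Leverage) = beta_size * d(ln Sales) = beta_size * ln(1 + growth)."""
    return beta_size * math.log(1 + sales_growth)

# e.g., a 1% increase in sales with beta_size = -0.03:
change = implied_leverage_change(-0.03, 0.01)  # roughly -0.0003
```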

4 coefficient = -0.342, p-value < 0.00<br />



Finally, we re-performed our regression by also accounting for the two control variables (i.e., strategy and<br />

ownership) in order to determine an airline’s leverage.<br />

Leveragei = α + β1 * M/B + β2 * Profitability + β3 * Size + β4 * Strategy + β5 * Ownership + εi<br />

Base level Strategy = Differentiation<br />

Base level Ownership = Private<br />

Variable Meaning Significance Coefficient Stand. Error t-statistic p-value<br />

α Constant *** 0.51 0.04 14.70 0.00<br />

β1 M/B 0.00 0.00 0.91 0.36<br />

β2 Profitability *** -2.07 0.26 -8.06 0.00<br />

β3 Size *** -0.02 0.01 -4.94 0.00<br />

β4 Strategy 0.02 0.01 1.39 0.17<br />

β5 Ownership *** -0.09 0.01 -8.18 0.00<br />

Legend: * = 10% significance, ** = 5% significance, *** = 1% significance<br />

Dependent variable: Long-term debt / Assets<br />

R² = 0.231; Adjusted R² = 0.225<br />

Table 3 – Results of Cross-Sectional Analysis including Airlines’ Strategy & Ownership Type<br />

The R² almost doubles from 12.2% (see Table 2) to 23.1%. This increase in explanatory power provides some support<br />

for our intuition that our two control variables are important in explaining capital structure choices. Yet, we find no<br />

significant results for the impact of the market-to-book ratio when we control for airlines’ strategy and ownership<br />

type. This finding confirms our intuition that the market-to-book ratio is not important in explaining capital structure<br />

in our sample. Furthermore, the impact of size decreases in magnitude, but it remains significant. Finally, the<br />

coefficient for profitability is now even more pronounced, indicating a leverage decrease of 2.07 percentage points<br />

for every 1 percentage point increase in profits. In contrast to the results based on descriptive statistics (see Table 1),<br />

we observe no difference in the level of leverage according to competitive strategy. This is another striking result,<br />

since we used long-term debt over assets as our dependent variable. The descriptive statistics also indicated that<br />

differentiated airlines are less leveraged than low-cost ones. We hypothesize that the competitive strategy dummy is<br />

not significant because the ownership dummy is included at the same time and these two variables are significantly<br />

correlated. Indeed, the ownership dummy is highly significant and suggests that airlines with mixed ownership have<br />

a leverage level 9 percentage points lower than private airlines. In the descriptive statistics, we were only able to<br />

detect a difference of roughly 5 percentage points. 5 This finding is not consistent with prior research which argued<br />

that state ownership of firms should lead to higher leverage (e.g., Titman & Wessels, 1988; Dewenter & Malatesta,<br />

2001).<br />

5.2 Target Adjustment<br />

We applied the test of Shyam-Sunder and Myers (1999) to examine target adjustment without accounting for<br />

airlines’ strategy and ownership type. A target adjustment coefficient of 1 would indicate perfect adjustment, so that<br />

each firm is at its target every single year. The results demonstrate stronger target adjustment processes (target<br />

adjustment coefficient = 0.41) than those indicated by Shyam-Sunder and Myers (1999) (target adjustment<br />

coefficient = 0.28). This finding is in favor of Trade-off explanations for capital structure since Pecking Order<br />

theory does not have an explanation even for these moderate adjustment processes. Next, we included interaction<br />

terms between the two control variables and the target adjustment coefficient in our empirical model. This enabled<br />

us to examine whether differences exist regarding the adjustment speed between differentiated and low-cost airlines<br />

and between privately and partially state-owned airlines. First, we observe that the explanatory power of our model<br />

increases by almost ten percentage points. Again, we may interpret this as providing some evidence that our two<br />

new variables contribute to a more comprehensive model. Second, the target-adjustment coefficient has a value of<br />

around 0.28, which indicates moderate target adjustment processes and it is roughly twice as large as the one found<br />

5 This may depend on the measurements used in our descriptive statistics: We used an average of the average leverage of each airline in the<br />

descriptive statistics, whereas in the regression we treat each quarterly data separately.<br />



by Shyam-Sunder and Myers (1999). Third, the findings show that neither the strategy nor the ownership dummy is<br />

significant. This means that there are no significant differences in the level of new debt issued or retired each year<br />

according to the airlines’ competitive strategy or ownership type. Fourth, the interaction term of<br />

competitive strategy and the adjustment coefficient is insignificant. As hypothesized, there appears to be no<br />

difference in target adjustment behavior between airlines with a differentiation and low-cost strategy.<br />

ΔDit = α + bta (D*it-Dit-1) + β1 * Strategy + β2 * Ownership + β3 * Strategy * (D*it-Dit-1) + β4 * Ownership * (D*it-Dit-1) + eit<br />

Base level Strategy = Differentiation<br />

Base level Ownership = Private<br />

Variable Meaning Significance Coefficient Stand. Error t-statistic p-value<br />

α Constant ** 85.13 40.47 2.10 0.04<br />

bta Target adjustment coefficient *** 0.28 0.04 7.71 0.00<br />

β1 Strategy 5.28 80.19 0.07 0.95<br />

β2 Ownership -95.42 61.83 -1.54 0.12<br />

β3 Interaction with Strategy -0.08 0.24 -0.32 0.75<br />

β4 Interaction with Ownership *** 0.44 0.07 6.74 0.00<br />

Legend: * = 10% significance, ** = 5% significance, *** = 1% significance<br />

R 2 = 0.430 ; Adjusted R 2 = 0.421<br />

Table 4 – Target Adjustment Test including Control Variables and Interaction Terms<br />

Finally, the interaction term of ownership structure and target adjustment is significant and very large. For<br />

interpretation, the interaction term and the target adjustment coefficient need to be added. Hence, airlines with<br />

partial state ownership revert back to their leverage target much more quickly than privately-owned airlines<br />

(adjustment coefficient = 0.72). 6 To the best of the authors’ knowledge, there is no underlying theoretical<br />

explanation regarding the impact of airlines’ state-ownership status which should lead to stronger target adjustment.<br />

It could be that when companies move away from their leverage target, they often issue more debt. Consequently,<br />

there is a need to rebalance capital structure by issuing equity. This new equity issue is not in the interest of a state<br />

that already holds shares in an airline. It may be that either new investors come in, and hence the state ownership<br />

will be diluted or the state has to buy the issued equity. That is, the state will have to pay for the re-adjustment to the<br />

target level either by diluted ownership or by buying up the issued shares.<br />
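The target-adjustment regression with interaction terms can be sketched the same way. This is a minimal illustration on synthetic data, assuming the Table 4 estimates (baseline speed 0.28, ownership interaction 0.44) as the true process; it shows how the state-owned adjustment speed of 0.72 is recovered as the sum of the two coefficients.<br />

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # hypothetical airline-period observations

gap = rng.normal(0, 100, n)        # deviation from target leverage, D*_it - D_it-1
ownership = rng.integers(0, 2, n)  # 1 = partial state ownership, 0 = private

# True process mirrors Table 4: private airlines close 28% of the gap per
# period, partially state-owned airlines close 28% + 44% = 72%
dD = 85 + 0.28 * gap + 0.44 * ownership * gap + rng.normal(0, 20, n)

# OLS with the ownership interaction: dD = a + b_ta*gap + b2*own + b4*own*gap
X = np.column_stack([np.ones(n), gap, ownership, ownership * gap])
beta, *_ = np.linalg.lstsq(X, dD, rcond=None)

speed_private = beta[1]            # adjustment speed of private airlines
speed_state = beta[1] + beta[3]    # add the interaction for state-owned ones
```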

5.3 Financing Deficit<br />

We performed the original financing deficit test without additional control variables to check if we observe the same<br />

behavior as Shyam-Sunder and Myers (1999) and Frank and Goyal (2003) observed in their respective studies.<br />

Based on the results of this test, our Pecking Order coefficient is 0.53 with an R² of 38.3 percent. The estimated level<br />

of this coefficient implies that only 53% of the financing deficit of airlines may be “filled in” by new debt issues.<br />

Shyam Sunder and Myers (1999) find a coefficient of 0.85 (R² = 86%), but restrict their sample to firms for which<br />

continuous data is available. Frank and Goyal (2003) find a coefficient of 0.28 (R² = 27%) by expanding the sample<br />

of Shyam-Sunder and Myers (1999) to include firms with reporting gaps. Thereby, our coefficient’s level stands in<br />

the middle of those two prior estimations, both in terms of magnitude and explanatory power. This finding is<br />

hard to reconcile with Pecking Order behaviour, which predicts that the financing deficit will always be<br />

filled with new debt issues, except when debt capacity is exhausted.<br />
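A minimal sketch of this financing deficit test follows, using the deficit definition of Shyam-Sunder and Myers (1999) (DEF = dividends + investment + change in working capital − internal cash flow) and simulating the Pecking Order coefficient of 0.53 reported above; the data are synthetic, not the paper’s sample.<br />

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical firm-year observations

# Components of the financing deficit in Shyam-Sunder and Myers (1999):
# DEF = dividends + investment + change in working capital - internal cash flow
dividends = rng.uniform(0, 50, n)
investment = rng.uniform(0, 200, n)
d_working_capital = rng.normal(0, 50, n)
cash_flow = rng.uniform(0, 150, n)
deficit = dividends + investment + d_working_capital - cash_flow

# Under strict Pecking Order the slope would be 1 (the deficit fully met with
# new debt); here the simulated slope is 0.53, the coefficient reported above
new_debt = 10 + 0.53 * deficit + rng.normal(0, 30, n)

# OLS: new_debt = a + b_po * deficit
X = np.column_stack([np.ones(n), deficit])
(alpha, b_po), *_ = np.linalg.lstsq(X, new_debt, rcond=None)
```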

6 This result is computed as follows: the sum of the general target adjustment coefficient (0.28) + the interaction term (0.44) = 0.72.<br />



ΔDit = α + bpo (DEFit) + β1 * Strategy + β2 * Ownership + β3 * Strategy * (DEFit) + β4 * Ownership * (DEFit) + εit<br />

Variable Meaning Significance Coefficient Stand. Error t-statistic p-value<br />

α Constant *** 152.75 42.07 3.63 0.00<br />

bpo Pecking Order Coefficient *** 0.65 0.05 13.95 0.00<br />

β1 Strategy -7.88 81.85 -0.10 0.92<br />

β2 Ownership ** -71.34 62.08 -1.15 0.25<br />

β3 Interaction with Strategy ** -0.27 0.13 -2.02 0.05<br />

β4 Interaction with Ownership *** -0.45 0.10 -4.72 0.00<br />

Legend: * = 10% significance, ** = 5% significance, *** = 1% significance<br />

Base level Strategy = Differentiation<br />

Base level Ownership = Private<br />

R 2 = 0.439 ; Adjusted R 2 = 0.429<br />

Table 5 – Frank and Goyal (2003)’s Test including Control Variables & Interaction Terms<br />

The results in Table 5 provide challenging insights into financing deficit levels when we account for the strategy<br />

and ownership type of the airlines and also include their interaction terms. Similarly to our previous test<br />

results, the explanatory power rises by 10 percentage points when we include the control variables. The Pecking<br />

Order coefficient increases to 0.65. Again, the level of this coefficient ranges between the one indicated by Shyam-<br />

Sunder and Myers (1999) and the one by Frank and Goyal (2003). That is, we once more find that a<br />

coefficient of 0.65 is hard to reconcile with Pecking Order behavior. Furthermore, the insignificant result for the<br />

Strategy dummy may imply that there is no difference in the level of new debt issues between low-cost airlines and<br />

those that follow a differentiation strategy. Yet, the Ownership dummy equals -71.34 and is significant. This implies<br />

that airlines with state ownership issue roughly $71 million less in new debt than privately-owned airlines in each<br />

year. The state-owned airlines tend to be less leveraged than private ones. Hence, they will issue less new debt each<br />

year to replace maturing portions of debt. We need to be careful regarding interpretation though, because the<br />

financing deficit test does not control for firm size. So, if private airlines are larger in size than state-owned airlines,<br />

it may imply that state-owned airlines issue less debt due to their smaller size.<br />

6. Conclusion<br />

On balance, our results support the intuition that a firm’s competitive strategy and ownership structure have an<br />

influence on capital structure and may bias the results of the existing empirical tests. We used the decision context<br />

of the airline industry. First, we find differences in leverage according to strategy and ownership structure in<br />

descriptive statistics and cross-sectional regressions. Second, we find that partially state-owned airlines exhibit much<br />

stronger target adjustment processes than privately-owned airlines. Third, partially state-owned airlines show less<br />

Pecking Order behavior than privately-owned airlines. This result corroborates the intuition that state ownership may<br />

be associated with Trade-off behavior. Fourth, we find that low-cost airlines exhibit less Pecking Order behavior<br />

than differentiated airlines, which suggests a significant role for competitive strategy in capital structure tests. These<br />

findings expand prior research findings on the impact of competitive strategy (Barton & Gordon, 1988;<br />

Balakrishnan & Fox, 1993) and ownership structure (Titman & Wessels, 1988; Dewenter & Malatesta, 2001) on a<br />

firm’s corporate behavior. At a practical level, financing managers and practitioners may recognize the role of<br />

strategy in capital structure decision-making. That is, managers and policy makers in an industry (e.g., airline)<br />

should carefully weigh the benefits of lower interest rates against the reduced ability to issue seasoned equity when<br />

considering state investing behavior in a specific industry. For instance, industry managers may consider the high<br />

costs of information asymmetries emphasized by Pecking Order theory as well as the tax benefit versus bankruptcy<br />

cost argument provided by Trade-off theory.<br />

Several caveats and challenges should be mentioned. First, we used a rather limited sample and focused on only<br />

one industry. Future research may examine the potential biases of existing tests for multiple industries and larger<br />

samples. Second, we used an international sample of airlines that necessitated currency conversions, which<br />

may have had an influence on our results. Finally, the decision context of the airline industry may not be<br />



comparable to many other industries because of its high concentration and competitiveness. These unique<br />

characteristics of the airline industry may have been mirrored in the financial data of airlines, hence making it<br />

difficult to generalize our results.<br />

7. References<br />

Ang, J., Chua, J., & McConnell, J. (1982). The Administrative Costs of Corporate Bankruptcy: A Note. Journal of Finance, 37(March), 219-226.<br />

Balakrishnan, S., & Fox, I. (1993). Asset Specificity, Firm Heterogeneity and Capital Structure. Strategic Management Journal, 14(1), 3-16.<br />

Barclay, M., Morellec, E., & Smith, C. (2006). On the Debt Capacity of Growth Options. Journal of Business, 79(1), 37-59.<br />

Barton, S. L., & Gordon, P. J. (1988). Corporate Strategy and Capital Structure. Strategic Management Journal, 9(6), 623-632.<br />

Bowman, J. (1980). The Importance of a Market Value Measurement of Debt in Assessing Leverage. Journal of Accounting Research, 18(1),<br />

242-254.<br />

DeAngelo, H., & Masulis, R. W. (1980). Optimal Capital Structure under Corporate and Personal Taxation. Journal of Financial Economics,<br />

8(1), 3 - 29.<br />

Dewenter, K. L., & Malatesta, P. H. (2001). State-Owned and Privately Owned Firms: An Empirical Analysis of Profitability, Leverage, and<br />

Labor Intensity. American Economic Review, 91 (1), 321-334.<br />

Fama, E. F., & French, K. R. (2005). Financing Decisions: Who Issues Stock? Journal of Financial Economics, 76(3), 549-582.<br />

Flannery, M. J., & Rangan, K. P. (2006). Partial Adjustment toward Target Capital Structures. Journal of Financial Economics, 79(3), 469-506.<br />

Frank, M. Z., & Goyal, V. K. (2008). Trade-off and Pecking Order Theories of Debt. In B. E. Eckbo (Ed) Handbook of Corporate Finance:<br />

Empirical Corporate Finance (Vol. 2, Chapter 12), Amsterdam: North-Holland (Elsevier).<br />

Frank, M. Z., & Goyal, V. K. (2009). Capital Structure Decisions: Which Factors are Reliably Important? Financial Management, 38(1), 1-37.<br />

Frank, M., & Goyal, V. (2003). Testing the Pecking Order Theory of Capital Structure. Journal of Financial Economics, 67(2), 217-248.<br />

Hovakimian, A., Opler, T., & Titman, S. (2001). The Debt-equity Choice. Journal of Financial and Quantitative Analysis, 36(1), 1-24.<br />

Jalilvand, A., & Harris, R. S. (1984). Corporate Behavior in Adjusting to Capital Structure and Dividend Targets: An Econometric Study. Journal<br />

of Finance, 39(1), 127-145.<br />

Jenkinson, T. (1998). Corporate Governance and Privatization via Initial Public Offering. Corporate Governance, State-owned Enterprises and<br />

Privatizations - OECD Proceedings (pp. 87-118), Paris, France: OECD Publishing.<br />

Jensen, M. C., & Meckling, W. H. (1976). Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure. Journal of Financial Economics, 3, 305-360.<br />

Jordan, J., Lowe, J., & Taylor, P. (1998). Strategy and Financial Policy in UK Small Firms. Journal of Business Finance and Accounting, 25, 1-<br />

27.<br />

Kalogeras, N., Labryga, T., Kruse, S., and J. Semeijn. (2010). Determining Capital Structure Choices in Transportation Industry: A Comparison<br />

Among the Main Theoretical Angles. International Conference on Global Trends in the Efficiency & Risk Management of Financial<br />

Services, EURO-Working Group on Efficiency and Productivity Analysis (EWG-EPA), Chania, Crete, July, 2-4, 2010, Greece, EU, pp.<br />

52.<br />

Kraus, A., & Litzenberger, R. H. (1973). A State-preference Model of Optimal Financial Leverage. Journal of Finance, 28, 911-922.<br />

Modigliani, F., & Miller, M. H. (1958). The Cost of Capital, Corporation Finance and the Theory of Investment. The American Economic<br />

Review, 48, 261-297.<br />

Modigliani, F., & Miller, M. H. (1963). Corporate Income Taxes and the Cost of Capital: A Correction. The American Economic Review, 53, 433<br />

- 443.<br />

Myers, S. C. (1984). The Capital Structure Puzzle. The Journal of Finance, 39, 575 - 592.<br />

Myers, S. C., & Majluf, N. S. (1984). Corporate Financing and Investment Decisions when Firms have Information that Investors do not have.<br />

Journal of Financial Economics, 13, 187 - 221.<br />

O'Brien, J. P. (2003). The Capital Structure Implications of Pursuing a Strategy of Innovation. Strategic Management Journal, 24, 415-431.<br />

Rajan, R. G., & Zingales, L. (1995). What Do We Know About Capital Structure? Some Evidence From International Data. Journal of Finance,<br />

50, 1421-1460.<br />

Shyam-Sunder, L., & Myers, S. C. (1999). Testing Static Trade-off Against Pecking Order Models of Capital Structure. Journal of Financial<br />

Economics, 51(2), 219-244.<br />

Titman, S. (1984). The Effect of Capital Structure on a Firm's Liquidation Decision. Journal of Financial Economics, 13(March), 137-151.<br />

Titman, S., & Wessels, R. (1988). The Determinants of Capital Structure Choice. The Journal of Finance, 43(1), 1 - 19.<br />

Warner, J. (1977). Bankruptcy Costs: Some Evidence. Journal of Finance, 32(2), 337-347.<br />

Williamson, O. (1988). Corporate Finance and Corporate Governance. Journal of Finance, 43(3), 567-591.<br />



UNDERSTANDING THE DIFFERENCE IN PREMIUM PAID IN ACQUISITION OF PUBLIC<br />

COMPANIES<br />

Nahum Biger 1 and Eli Ziskind<br />

Abstract. The variance in acquisition premium across companies is driven by differences in a set of firm- and deal-specific<br />

factors between acquisition targets. Understanding the drivers of acquisition premium variance is an important subject in<br />

investment theory, as it allows investors seeking to invest in publicly listed companies undergoing, or likely to undergo, merger or<br />

acquisition processes to better analyze the effect of such processes on a target's share price (due to the potential premium to be<br />

introduced). This research aims to explain the variance in premium paid in acquisition transactions. It breaks down the<br />

transaction data set into geographic regions, as well as identifying specific variables that have a significant correlation with<br />

premium variance. It suggests a different approach to analyzing the variance of premium paid in acquisition transactions,<br />

namely a multi-variable regression analysis that divides the data set into geographic regions. A sample of 607 mergers<br />

completed between December 1995 and September 2009 was studied.<br />

1. Introduction<br />

Since the early 1980s the number of merger and acquisition transactions has gradually increased. Companies have<br />

been acquired by private individuals, funds, competitors etc.<br />

An acquisition price is typically driven by multiple factors, comprising among other: projected and historic<br />

financial performance, anticipated value of synergies between the acquired company (“target”) and the acquirer, the<br />

stake that is being acquired (namely, whether it is a control stake or not), market share and technological advantage<br />

of the target, etc.<br />

Acquisition premium is normally referred to as the difference between the market value of a target’s share<br />

capital and the price paid for the share capital by an acquirer of the shares.<br />

The purpose of the research is to try and explain part of the variance in acquisition premium paid in different<br />

acquisition transactions using a multi-variable regression analysis and to opine on whether the variance could be<br />

better explained while breaking the dataset into transactions origin in different regions.<br />

Acquisition premium is a well-debated subject in the economic literature. Many studies have elaborated on<br />

how to measure an acquisition premium. Bargeron, Schlingemann, Stulz<br />

and Zutter (2008) measure the premium by the difference between the price paid for the share capital of a target and<br />

the market value of a target’s share capital as of 3 days, or 42 days prior to the acquisition announcement;<br />

Gaspar, Massa and Matos (2005) choose the days to be 63 and 123 days prior to the acquisition announcement;<br />

Laamanen (2007) uses 1 day, 1 week and 1 month prior to the announcement date; Porrini (2006) refers to 4 weeks<br />

prior to the acquisition announcement; and Song and Walkling (2000) defined acquisition premium as the difference<br />

between the highest price paid per share and the target share price four weeks prior to the announcement date.<br />

While different studies have used different definitions for acquisition premium, the majority seem to be<br />

using a premium calculated as the difference between the share price paid by the acquirer for the target’s shares and<br />

the average market price of the target share measured by a time range of between 4 and 8 weeks prior the<br />

announcement date.<br />

Thus, researchers disagree about the time frame on which a premium is measured. While normally a researcher<br />

would tend to compare the acquisition price to the latest market price available, prior to the transaction, doing so<br />

could bias the results and provide an indication of a lower premium.<br />

1 Carmel Academic Center, Haifa, Israel. The paper is based on Mr. Ziskind’s thesis at the University of Haifa, Haifa, Israel<br />



The reason for that could be the effect information leakages may have on the share price of a target. As a<br />

transaction is getting close towards an announcement, individuals comprising, among others, employees of the target<br />

and acquirer, consultants, professional service providers etc. are becoming aware of the anticipated announcement<br />

and could leak information to the market. Hence, the longer the time frame (prior to the announcement) over which a<br />

premium is measured, the lower the effect of information leakages will be. However, too long a time frame for the<br />

premium calculation would be inaccurate as well, due to the outdated market price (which would not reflect the effect<br />

of recent corporate announcements by the target).<br />

In order to minimize the likelihood of using a biased acquisition premium and due to the limited available<br />

information, we have calculated acquisition premium based on the difference between the highest price paid per<br />

share within an acquisition of a target and the target share price four weeks, one week and one day prior to the<br />

announcement date.<br />
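The premium definition above reduces to a one-line calculation. The prices below are hypothetical numbers chosen for illustration; note how, when information leaks push the pre-announcement price up, the measured premium shrinks as the reference window gets closer to the announcement.<br />

```python
def premium_pct(price_paid: float, reference_price: float) -> float:
    """Acquisition premium as a percentage of the pre-announcement share price."""
    return (price_paid / reference_price - 1.0) * 100.0

# Hypothetical target share prices before the announcement (illustrative only)
pre_announcement = {"4 weeks": 10.00, "1 week": 10.80, "1 day": 11.50}
highest_paid = 13.00  # highest price paid per share in the acquisition

premiums = {h: premium_pct(highest_paid, p) for h, p in pre_announcement.items()}
# premiums["4 weeks"] -> 30.0, while the 1-day window yields a smaller premium
```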

Prior research into premium variance includes Bargeron, Schlingemann, Stulz and Zutter (2008), who found<br />

that premiums paid in acquisitions were 35% higher when the acquirer was a public, rather than a private, company.<br />

The authors did not explain this finding. They also found that premiums paid by strategic acquirers (corporate<br />

companies or “corporates”) were 63% higher than those paid by financial acquirers (private equity firms/funds).<br />

They suggested that corporates pay more for acquisitions because they expect to benefit from synergies with their<br />

existing activities, whereas financial acquirers usually have no such synergies.<br />

Moeller, Schlingemann, and Stulz (2004) report that large firms tend to pay higher premiums (as a percentage of<br />

the acquisition price) than smaller firms. They conjecture that larger firms could realize more synergies from<br />

acquisitions due to the usage of larger integration teams or having more experience in synergies realization.<br />

Hanouna, Sarin, and Shapiro (2001) found that a higher percentage premium was paid in acquisition-of-control<br />

transactions. They explain that an acquirer is probably willing to pay a higher price when acquiring control as it<br />

provides the acquirer with better access to the target as well as ability to better utilize the target’s resources and<br />

realize synergies. They also found that a 30% higher premium was paid in US transactions (i.e. both the target and the<br />

acquirer are US based) relative to cross-border transactions.<br />

Former researchers have found that the size of the premium paid in acquisitions is driven by the type of acquirer,<br />

general economic conditions (e.g., an active merger and acquisition (“M&A”) market) and market conditions. Premiums<br />

tend to be higher given generally positive prospects for the targets’ financial results and a competitive bidding<br />

atmosphere, the size of the acquirer (measured, for example, by market capitalization or revenues) and the percentage<br />

acquired in the target (in particular, whether control has been acquired).<br />

It is clear that the variance in acquisition premium percentage is driven by multiple factors. In addition,<br />

transactions happening in different global regions might be completed with different premia as regions tend to differ<br />

from one another in terms of the level of development, competition, opportunities, projected growth etc.<br />

For example, a retailer operating in a stable Western market such as France would provide its potential acquirer<br />

with a relatively clear growth prospect. Revenues of such a company would normally tend to be stable in the long<br />

run as they are closely correlated with the steady long term growth of the population and GDP per capita. A retailer<br />

based in India, however, could provide its potential acquirer with a more rapid growth profile due to the fast-growing<br />

population and constantly increasing GDP per capita. It would also tend to be associated with a higher risk level<br />

due to the unstable market environment and lack of market “education” or “tradition”.<br />

The pitfalls of measuring the variance in acquisition premium are that having too many variables could lead to<br />

biased as well as non-significant results. In addition, there might be autocorrelation that could obscure the<br />

results.<br />

Another problem is the lack of variables that represent insider information about the target, held by the acquirer,<br />

ahead of the contemplated transaction. Such information could comprise, among others, contracts with major clients,<br />

intellectual property rights and any other information collected by the potential acquirer during the due diligence<br />

process or along the transaction which will enable the acquirer to evaluate the size of synergies, as well as added<br />

value contributed by other factors. Such a variable does not exist and could not be represented by a proxy.<br />



The pitfalls mentioned above have been avoided by:<br />

• Using the “stepwise routine” to eliminate variables that do not provide further explanation of the regression’s<br />

results, hence “maximizing” the R-squared of the regression;<br />

• The inability to find a variable that captures the “non-publicly available” information could be offset<br />

by the fact that a final price, achieved in an acquisition transaction, comprises multiple factors and is<br />

finally determined by negotiations, which usually tend to be bounded by historic premiums paid in similar<br />

transactions or in transactions completed in specific regions throughout a given time frame, which lowers the<br />

magnitude of missing a variable from the regression equation.<br />
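The “stepwise routine” is not spelled out in the text; a common variant is backward elimination, which keeps dropping the regressor whose removal improves adjusted R-squared. The sketch below, on synthetic data with made-up variable names, implements that variant and is not necessarily the authors’ exact procedure.<br />

```python
import numpy as np

def adj_r2(y, X):
    """Adjusted R-squared of an OLS fit of y on X (X must include an intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - (ss_res / (n - k)) / (ss_tot / (n - 1))

def backward_stepwise(y, X, names):
    """Drop regressors one at a time while adjusted R-squared improves."""
    keep = list(range(X.shape[1]))
    best = adj_r2(y, np.column_stack([np.ones(len(y)), X[:, keep]]))
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [c for c in keep if c != j]
            score = adj_r2(y, np.column_stack([np.ones(len(y)), X[:, trial]]))
            if score > best:
                best, keep, improved = score, trial, True
                break
    return [names[c] for c in keep], best

# Hypothetical data: only the first two regressors actually drive the premium
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 1, 400)
selected, score = backward_stepwise(y, X, ["growth", "control", "noise1", "noise2"])
```

The informative regressors always survive elimination, since dropping them collapses the adjusted R-squared.<br />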

The study provides a unique insight into the existing literature on global merger & acquisition transactions, as it<br />

uses the “stepwise routine” method in order to try to explain the variance in premium paid in M&A<br />

transactions by using multiple variables, as well as to opine on whether the variance could be better explained by<br />

breaking the dataset into transactions completed in different geographic regions.<br />

2. Research Hypotheses<br />

1. Variance in acquisition premium could be better explained in specific geographies, relative to the global data set.<br />

2. Variance in acquisition premium is positively correlated with: change of control, revenue growth, EBITDA<br />

growth, use of advisors and deal attitude.<br />

3. Variance in acquisition premium is negatively correlated with: target LTM return, implied equity to book<br />

value, the ratio between enterprise value and the last twelve months (“LTM”) revenues, the ratio between<br />

enterprise value to LTM EBITDA and the ratio between market capitalization and net income.<br />

3. Data description<br />

Database: a transaction database with the following characteristics:<br />

• Transactions sourced from the M&A Monitor service, the CapitalIQ website, the SDC data source and Reuters, including data<br />

on 607 merger and acquisition transactions announced and completed (i.e. not “pending”) between<br />

December 1995 and September 2009;<br />

• Target companies listed for trade on public stock exchanges;<br />

• Transactions in which a significant minority stake or control has been transferred to the acquirer;<br />

• Transactions completed in the following regions: the Americas (North and South), Europe, Asia, and<br />

the Middle East and Africa (“MENA”);<br />

• Transactions in which the premium could be identified for 1 day, 1 week and 1 month<br />

prior to the transaction announcement;<br />

• Transactions comprising both strategic and financial types of acquirers;<br />

• Targets with projected revenues, EBITDA and net income for 1 and 2 years after the date of the<br />

acquisition.<br />

Dependent variables: percentage premium paid, measured three times: one month prior, one week prior and one day prior<br />

to the announcement of the transaction.<br />

Explanatory (independent) variables:<br />

• Transaction value – represents the value of the target in millions of US$ as of the transaction date. Transaction<br />

value should theoretically be positively correlated with the absolute value (in millions of US$) of premium paid<br />

in a transaction, as premium is paid in percentage of a company’s value. However, as premium is measured in<br />

% rather than in absolute value, transaction value should not be correlated to premium.<br />

• Percentage sought – represents the % of shares (out of the total share capital of the target) that have been<br />

acquired by the acquirer. This variable should be positively correlated with premium size: theoretically, the<br />

larger the stake an acquirer acquires in a target, the greater its ability to implement<br />

synergies and cost-cutting processes and to increase the return on its investment.<br />

• Target market capitalization 1 day prior to transaction date – represents the market value of the target in millions of<br />

US$ one day prior to the acquisition announcement. This variable should theoretically be positively correlated<br />

with the absolute value (in millions of US$) of premium paid in a transaction, as premium is paid in percentage<br />



of a company’s value. However, as premium is measured in % rather than in absolute value, the variable should<br />

not be correlated to premium.<br />

• Target total revenue LTM – represents the revenues of the target in millions of US$ over the last twelve months<br />

prior to the date of the acquisition. This variable should theoretically be positively correlated with the absolute<br />

value (in millions of US$) of premium paid in a transaction, as premium is paid in percentage of a company’s<br />

value and the bigger revenues are, the larger would the company’s value be. However, as premium is measured<br />

in % rather than in absolute value, the variable should not be correlated to premium.<br />

• Target total EBITDA LTM – represents the EBITDA of the target in millions of US$ over the last twelve<br />

months prior to the date of the acquisition. This variable should theoretically be positively correlated with the<br />

absolute value (in millions of US$) of premium paid in a transaction, as premium is paid in percentage of a<br />

company’s value and the bigger EBITDA is, the larger would the company’s value be. However, as premium is<br />

measured in % rather than in absolute value, the variable should not be correlated to premium.<br />

• Target LTM return – represents the change (in percentage) between the target’s share value as of 1 year prior to<br />

the date of the acquisition and the value as of the last day prior to the acquisition date. This variable should<br />

theoretically be negatively correlated with the premium paid in a transaction: the higher the return, the<br />

more the target’s share price has increased over the last twelve months. Hence, if a target’s share price has<br />

increased significantly, the target would be perceived as less attractive in terms of value and the acquirer would<br />

logically be willing to pay a lower premium for it.<br />

• Revenue growth – represents the change (in percentage) in the target’s revenue over the last twelve<br />

months. This variable should theoretically be positively correlated with the premium paid in a transaction, as the<br />

higher the revenue growth is, the more attractive the target is as it embeds attractive growth opportunity and<br />

value enhancement. Hence, the acquirer will be willing to pay more for a company with a better growth<br />

potential and revenue growth could serve as a proxy for growth prospects.<br />

• EBITDA growth – represents the change (in percentage) in the target’s EBITDA over the last twelve<br />

months. This variable should theoretically be positively correlated with the premium paid in a transaction, as the<br />

higher the EBITDA growth is, the more attractive the target is as it embeds attractive growth opportunity and<br />

value enhancement. Hence, the acquirer will be willing to pay more for a company with a better growth<br />

potential and EBITDA growth could serve as a proxy for growth prospects.<br />

� Last twelve months EBITDA margin – represents a target’s EBITDA margin on revenues (in percentage) over<br />

the last twelve months. This variable should theoretically not be correlated with the premium paid in a<br />

transaction as different businesses have different margin levels and premium paid in their acquisitions should<br />

not be subject to margin.<br />

� Implied equity value to book value – represents a target’s equity value (market value of share capital), divided<br />

by the target’s book value (accounting value of the assets less accounting value of the liabilities) in percentages.<br />

This variable should theoretically be negatively correlated with the premium paid in a transaction as the higher<br />

the ratio is, the higher the target is valued (as the market estimates a premium exists over the book value of the<br />

shares). And should a target be highly valued, normally an acquirer would tend to pay a lower premium on top<br />

of the value.<br />

� Enterprise value to last twelve months revenues – represents a target’s enterprise value (market value of share<br />

capital plus the net debt), divided by the target’s revenues over the last twelve months prior to the acquisition in<br />

percentages. This variable should theoretically be negatively correlated with the premium paid in a transaction<br />

as the higher the multiple is, the higher the target is valued (as the market compensates the target with a higher<br />

valuation relative to its actual financial performance). And should a target be highly valued, normally an<br />

acquirer would tend to pay a lower premium on top of the value.<br />

� Enterprise value to last twelve months EBITDA – represents a target’s enterprise value (market value of share<br />

capital plus the net debt), divided by the target’s EBITDA over the last twelve months prior to the acquisition in<br />

percentages. This variable should theoretically be negatively correlated with the premium paid in a transaction<br />

as the higher the multiple is, the higher the target is valued (as the market compensates the target with a higher<br />

valuation relative to its actual financial performance). And should a target be highly valued, normally an<br />

acquirer would tend to pay a lower premium on top of the value.<br />

� Market capitalization to last twelve months net income – represents a target’s market value of its share capital,<br />

divided by the target’s net income over the last twelve months prior to the acquisition in percentages. This<br />

variable should theoretically be negatively correlated with the premium paid in a transaction as the higher the<br />

multiple is, the higher the target is valued (as the market compensates the target with a higher valuation relative<br />

to its actual financial performance). And should a target be highly valued, normally an acquirer would tend to<br />

pay a lower premium on top of the value.<br />

� Use of advisors – represents a dummy variable which is valued 0 when no advisors have been used in a<br />

transaction and 1 when advisors have been used. This variable should theoretically be positively correlated with<br />

the premium paid in a transaction, as advisors are hired by targets in order to help the target be sold at<br />

its full potential value (i.e. including the maximum premium). Advisors run auction processes and their<br />

added value is the additional premium paid.<br />

� Majority / minority – represents a dummy variable which is valued 1 when the transaction represents a majority<br />

(over 50% of the shares) transaction and 0 when the transaction represents a minority transaction. This variable<br />

should theoretically be positively correlated with the premium paid in a transaction as majority acquisition<br />

allows the acquirer to consolidate the target’s financial statements into its own and also to more efficiently<br />

execute synergy measures.<br />

� Change of control – represents a dummy variable which is valued 1 when the transaction represents a control<br />

(legal or effective) transfer and 0 when the transaction does not represent a control transfer. It should be noted<br />

that control could be legal (above 50% of the shares in most of the markets or above 30% in the UK) or<br />

effective (a stake lower than 50% but with rights to appoint board members). This variable should theoretically<br />

be positively correlated with the premium paid in a transaction as majority acquisition should allow the acquirer<br />

to control the target, navigate its activities and implement any measures.<br />

� Deal attitude – represents a dummy variable which is valued 1 when the transaction has been executed on<br />

friendly terms (the board of the target welcomed the acquirer and cooperated during the transaction) and 0 when<br />

the transaction has been executed in a hostile manner (the board of the target did not welcome the acquisition<br />

and the acquirer had to submit a hostile tender offer to the target’s shareholders). This variable should<br />

theoretically be negatively correlated with the premium paid in a transaction as hostile bids are not<br />

recommended by the board of directors of the target, hence the shareholders of the target should be convinced<br />

to actually accept the offer (which often requires the acquirer to pay a significant premium).<br />
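For concreteness, the continuous explanatory variables above can be sketched as simple computations (a hypothetical illustration; the function names and sample figures are mine, not drawn from the paper's data set):

```python
def ltm_return(price_now, price_1y_ago):
    """Target LTM return: share-price change over the last twelve months, in %."""
    return (price_now / price_1y_ago - 1) * 100

def growth(current_ltm, prior_ltm):
    """Revenue or EBITDA growth over the last twelve months, in %."""
    return (current_ltm / prior_ltm - 1) * 100

def ev_to_ebitda(market_cap, net_debt, ebitda_ltm):
    """Enterprise value (market cap plus net debt) over LTM EBITDA."""
    return (market_cap + net_debt) / ebitda_ltm

# Hypothetical target: share price up from 8.0 to 10.0 over the year,
# LTM revenue up from 500 to 550, market cap 1,000, net debt 200, EBITDA 120.
print(ltm_return(10.0, 8.0))               # 25.0
print(ev_to_ebitda(1000.0, 200.0, 120.0))  # 10.0
```

The dummy variables (use of advisors, majority/minority, change of control, deal attitude) would simply be coded 1/0 per transaction, as described above.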

Descriptive statistics for the premium (dependent) variables<br />

Geography             Global                       The Americas<br />
Premium     1-month   1-week   1-day     1-month   1-week   1-day<br />
Average       34%       29%     24%       36.8%     31.1%    26.8%<br />
Median        31%       26%     22%       33.1%     28.8%    24.6%<br />
Max           99%       93%     69%       98.8%     92.9%    68.7%<br />
Min            0%        1%    (4%)        0.3%      1.1%   (1.6%)<br />
STD           19%       17%     16%       19.7%     17.2%    15.8%<br />
Mode          33%       33%      7%       33.3%     33.3%    33.3%<br />
Skewness      0.7       0.6     0.7        0.6       0.6      0.6<br />
Variance       4%        3%      2%        3.9%      3.0%     2.5%<br />

Geography             Europe                       Asia and MENA<br />
Premium     1-month   1-week   1-day     1-month   1-week   1-day<br />
Average       28%       22%     19%       33.0%     27.2%    22.1%<br />
Median        24%       18%     14%       31.7%     26.7%    21.4%<br />
Max           84%       66%     67%       70.9%     72.0%    64.8%<br />
Min            0%        4%      0%        3.3%      5.0%   (4.4%)<br />
STD           17%       14%     13%       17.0%     14.8%    14.9%<br />
Mode          24%       17%      7%       33.3%     32.7%    14.4%<br />
Skewness      1.0       0.9     1.1        0.4       0.5      0.8<br />
Variance       3%        2%      2%        2.9%      2.2%     2.2%<br />
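The statistics reported above can be reproduced from a vector of premiums with a short routine (a sketch; the sample below is synthetic, not the study's data):

```python
import numpy as np

def premium_stats(premiums):
    """Descriptive statistics in the style of the table above; `premiums`
    holds premiums as fractions (0.34 == 34%)."""
    p = np.asarray(premiums, dtype=float)
    mean = p.mean()
    skew = ((p - mean) ** 3).mean() / p.std(ddof=0) ** 3  # population skewness
    return {
        "Average": mean,
        "Median": float(np.median(p)),
        "Max": p.max(),
        "Min": p.min(),
        "STD": p.std(ddof=1),       # sample standard deviation
        "Skewness": skew,
        "Variance": p.var(ddof=1),  # sample variance
    }

# Synthetic, right-skewed premium sample (premiums are bounded below near 0).
rng = np.random.default_rng(1)
stats = premium_stats(rng.lognormal(mean=-1.2, sigma=0.5, size=500))
```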

4. Overview of the research method<br />

The analysis used OLS regressions under different scenarios and limitations as follows:<br />

� Stage 1 – regressions implemented with and without dummy variables, for each of the three dependent<br />

variables: premium paid measured 1-month, 1-week and 1-day prior to the transaction announcement;<br />

� Stage 2 – separate regressions broken into 4 geographic regions: global (all regions grouped together), Europe,<br />

North America and Asia & Middle East & Africa (“MENA”).<br />

� Stage 3 – Step-wise regressions, with and without dummy variables, for the same dependent variables.<br />

� Stage 4 – Step-wise regressions, with and without dummy variables, broken into 4 geographic regions.<br />
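The staged procedure can be illustrated with a toy OLS plus forward step-wise routine (a sketch only: the paper does not specify its step-wise criterion, so the R-squared-gain threshold below is an assumption, and the data are synthetic):

```python
import numpy as np

def ols_r2(X, y):
    """Fit OLS with an intercept and return the R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the column that most improves
    R-squared, stopping when the best gain falls below min_gain."""
    selected, best_r2 = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        r2, j = max((ols_r2(X[:, selected + [j]], y), j) for j in remaining)
        if r2 - best_r2 < min_gain:
            break
        selected.append(j)
        remaining.remove(j)
        best_r2 = r2
    return selected, best_r2

# Synthetic sample: "premium" driven by columns 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 0.30 + 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200)
cols, r2 = forward_stepwise(X, y)
```

On this synthetic sample the routine recovers the two informative columns and discards the noise columns.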

The table below provides summary results of the regressions.<br />

Region                         Global              Americas             Europe              Asia/MENA<br />

General data<br />
1-month premia<br />
Average premium                34.4%               36.8%                27.6%               33.0%<br />
Median premium                 31.3%               33.1%                23.7%               31.7%<br />
Standard deviation             19.2%               19.7%                17.2%               17.0%<br />
1-week premia<br />
Average premium                28.8%               31.1%                22.1%               27.1%<br />
Median premium                 26.4%               28.8%                18.4%               26.7%<br />
Standard deviation             16.6%               17.2%                14.0%               14.6%<br />
1-day premia<br />
Average premium                24.5%               26.8%                18.8%               22.0%<br />
Median premium                 22.0%               24.6%                14.5%               20.9%<br />
Standard deviation             15.6%               15.8%                13.3%               14.7%<br />

Regression statistics (per region: standard regression / step-wise routine)<br />
1-month premia<br />
Including dummy variables<br />
R-squared                      7.8% / 7.5%         7.1% / 5.8%          27.0% / 20.4%       33.1% / 22.2%<br />
% of significant variables     16.7% / 100.0%      11.1% / 75.0%        – / 100.0%          5.6% / 66.7%<br />
Excluding dummy variables<br />
R-squared                      5.8% / 6.1%         6.4% / 5.8%          21.8% / 15.7%       27.8% / 19.0%<br />
% of significant variables     14.3% / 83.1%       14.3% / 75.0%        – / 75.0%           14.3% / 100.0%<br />
1-week premia<br />
Including dummy variables<br />
R-squared                      8.6% / 7.8%         8.5% / 6.9%          22.9% / 10.1%       32.5% / 28.6%<br />
% of significant variables     22.2% / 100.0%      22.2% / 50.0%        – / 100.0%          5.6% / 83.3%<br />
Excluding dummy variables<br />
R-squared                      6.0% / 4.7%         6.3% / 5.0%          19.3% / 10.1%       24.4% / 16.7%<br />
% of significant variables     14.3% / 60.0%       21.4% / 100.0%       – / 100.0%          14.3% / 100.0%<br />
1-day premia<br />
Including dummy variables<br />
R-squared                      11.4% / 10.6%       13.9% / 11.8%        25.1% / 17.4%       33.1% / 26.0%<br />
% of significant variables     16.7% / 71.4%       27.8% / 83.3%        5.6% / 100.0%       5.6% / 80.0%<br />
Excluding dummy variables<br />
R-squared                      9.8% / 1.3%         10.6% / 8.9%         23.0% / 19.1%       32.5% / 23.3%<br />
% of significant variables     28.6% / –           28.6% / 100.0%       14.3% / 80.0%       7.1% / 80.0%<br />

A few clear insights can be drawn from the table above:<br />

1. Generally, the standard deviation in premium paid decreased in the individual regions (with the exception of<br />

the Americas) relative to the global results.<br />

2. Regressions run on the global market data set, without a breakdown per market, provided less significant<br />

results (lower R-squared) and had a lower number of significant variables, with the exception of the<br />

Americas.<br />

3. Amongst the regions, regressions implemented on Europe and Asia & MENA yielded more<br />

significant results.<br />

4. Across the regressions run, those that included dummy variables generally had a higher R-squared, yet a<br />

lower number of significant variables.<br />

5. Where regressions seemed more effective (Europe and Asia & MENA), regressions implemented with the<br />

1-month premium as the explained variable provided better results in most cases (higher R-squared)<br />

and had more significant variables.<br />

5. Results<br />

Variable                                                   Coefficient sign   Appearances (of 48 regressions)   % of appearances significant<br />
Change of Control (yes=1, no=0)                            Positive           31                                71.0%<br />
Deal Attitude (friendly=1, hostile=0)                      Positive           26                                53.8%<br />
EBITDA growth LTM                                          Negative           34                                61.8%<br />
Enterprise value / EBITDA LTM                              Negative           15                                73.3%<br />
Enterprise value / revenue LTM                             Positive           19                                89.5%<br />
Implied equity value / book value                          Positive           27                                29.6%<br />
LTM EBITDA margin %                                        Positive           30                                50.0%<br />
Revenue growth LTM                                         Positive           30                                80.0%<br />
Target enterprise value at transaction date                Negative           37                                83.8%<br />
Target LTM return (%)                                      Negative           43                                81.4%<br />
Target market capitalization at transaction date − 1 day   Negative           28                                60.7%<br />

The insights from the table above can be summarized as follows:<br />

1. Change of control has a positive correlation with premium paid in acquisitions. It showed consistently<br />

positive correlation results, and the variable was mostly significant.<br />

2. Deal attitude had a positive correlation with premium paid (i.e. a friendly deal attitude usually yielded a higher<br />

premium).<br />

3. EBITDA growth LTM had a negative correlation with premium paid. This implies that companies with<br />

higher growth rates received a lower premium upon acquisition.<br />

4. Enterprise value / revenue LTM and Enterprise value / EBITDA LTM had opposite correlation results. Both<br />

of these variables measure the relation between premium and market valuation. However,<br />

Enterprise value / revenue LTM showed more significant results and was found positive.<br />

5. Implied equity value / book value had a positive correlation with premium, effectively implying that companies<br />

with higher market valuations were acquired at a higher premium. However, the results were mostly not<br />

significant.<br />

6. LTM EBITDA margin had a positive correlation with premium paid. Thus, companies with higher margins<br />

received a higher premium upon acquisition.<br />

7. Revenue growth LTM had a positive correlation with premium paid. Hence, companies with higher growth<br />

rates received a higher premium upon acquisition.<br />

8. Target enterprise value at transaction date had a negative correlation with premium paid, implying that<br />

companies with a relatively high enterprise value received a lower premium. The variable appeared in most of<br />

the regressions and was significant in most cases.<br />

9. Target LTM return had a negative correlation with premium paid, implying that companies whose share price<br />

had increased significantly over the last year received a lower premium. The variable appeared in most of<br />

the regressions and was significant in most cases.<br />

10. Target market capitalization at transaction date − 1 day had a negative correlation with premium paid, implying<br />

that companies with a relatively high market value received a lower premium. The variable did not appear in<br />

all of the regressions and was significant at low levels in most cases.<br />
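Tallies like those in the results table can be produced by walking over the fitted regressions (a sketch; the `runs` structure and its coefficients are hypothetical, and the 10% significance cut-off is an assumption):

```python
from collections import defaultdict

# Hypothetical structure: for each regression, a dict mapping
# variable name -> (coefficient, p_value).
runs = [
    {"Target LTM return": (-0.21, 0.03), "Revenue growth LTM": (0.15, 0.08)},
    {"Target LTM return": (-0.18, 0.09), "Revenue growth LTM": (0.11, 0.21)},
    {"Target LTM return": (-0.25, 0.01)},
]

appearances = defaultdict(int)
significant = defaultdict(int)  # significant at the 10% level
for run in runs:
    for var, (coef, p) in run.items():
        appearances[var] += 1
        if p < 0.10:
            significant[var] += 1

for var in appearances:
    share = 100 * significant[var] / appearances[var]
    print(f"{var}: {appearances[var]} appearances, {share:.1f}% significant")
```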

6. Conclusions<br />

Hypotheses validation<br />

1. Variance in acquisition premium could be better explained in specific geographies, relative to a global data set.<br />

The hypothesis was partially validated by the regression results, as the regressions were found more significant, and<br />

with more significant variables, for Europe and Asia & MENA than for the global data set.<br />

The exception is transactions in the Americas, which produced poor regression results relative to the global data<br />

set. It may be useful for future research to focus on the Americas markets and to further break the data set into<br />

industry segments in order to isolate groups of transactions and reach better results.<br />

The clear conclusion is that when measuring the variance in premium paid in merger and acquisition transactions, it<br />

is important to separate the data set into geographic regions, rather than to take a global data set. This could be<br />

explained by the different conditions in different geographies that should be taken into account while analyzing the<br />

variance in premium paid for acquisitions, such as political risk, growth rates, risk premium and tax regime (e.g.<br />

more favorable in emerging markets than in western economies). In addition, there are differences in:<br />

� Profit sharing and the ability to take cash out of the country (e.g. companies which acquire Indian firms<br />

often have difficulties taking dividends out of India)<br />

� General acquisition preferences for a region (e.g. Asia is considered a choice acquisition target for<br />

financial services and insurance companies, as the market is considered untapped and offers significant<br />

growth opportunities; acquirers are hence willing to pay a high premium in order to penetrate this market)<br />

Hence, the hypothesis seems to be valid and to apply to Europe and Asia & MENA.<br />

It should be noted that the common practice for valuation exercises in professional firms is to compare acquisition<br />

premium on a sector and geographic basis, typically for the abovementioned arguments.<br />

2. Variance in acquisition premium is positively correlated with the following variables: change of control,<br />

revenue growth, EBITDA growth, use of advisors, majority / minority and deal attitude.<br />

� Change of control – we found a positive correlation between change of control and premium paid, confirming<br />

the hypothesis. Additionally, no correlation was found between the percentage sought and premium paid,<br />

indicating that acquirers were willing to pay a premium in order to secure control over a target but not to acquire<br />

more shares (as more shares in a target would not necessarily secure control over a target). Acquirers would not<br />

typically pay a higher premium for more shares at the target. They would only do so if the shares acquired<br />

provide control over the target. The above finding confirms the control premium hypothesis (i.e. premium in<br />

control acquisition transactions should be larger than premium paid in transactions in which control was not<br />

secured).<br />

� Revenue growth – a positive and significant correlation was found between revenue growth and premium paid.<br />

This implies two conclusions: 1. acquirers view revenue growth as a relevant indication for company growth<br />

and value creation (rather than net income, EBITDA or cash flow growth); 2. Acquirers are willing to pay a<br />

premium for a company which showed historic growth (revenue growth was taken as the LTM growth). Hence<br />

the hypothesis was confirmed.<br />

� EBITDA growth – as opposed to the hypothesis above, the hypothesis in this case was rejected, as a negative<br />

correlation was found between EBITDA growth and premium paid (however, the correlation was significant in<br />

only 60% of the cases). The above could imply that acquirers do not necessarily see EBITDA growth as a<br />

value accretive exercise. On the contrary, acquirers could view companies that focus on EBITDA growth as<br />

more focused on margin improvement rather than securing control over a market (as implied by revenue<br />

growth). The conclusion is that acquirers seem willing to compensate companies for revenue growth and see<br />

such a growth as a value adding phenomenon, yet do not see much of a value added by EBITDA growth.<br />

� Use of advisors – prior research (e.g. Porrini, “Are investment bankers good for acquisition premiums”)<br />

implied that the use of advisors is positively correlated with acquisition premium. We did not find a significant<br />

positive correlation between the two variables. The explanation for the difference in results could be driven by<br />

asymmetric information theory: over the last 10 years, with the wider penetration of the internet, companies have<br />

much more information on merger & acquisition transactions at their disposal and hence do not need advisors to<br />

better understand the acquisition premium paid in similar transactions. In the past, such information was exclusively<br />

at the disposal of advisors and hence a company that entered acquisition negotiations without using advisors did<br />

not have access to such information. As our research is based on a more recent transaction database, it is likely<br />

that the access to information factor is already embedded in the premium taken into consideration in the sample<br />

used.<br />

� Deal attitude – in practical discussions, it is often argued that a positive transaction attitude should provide a<br />

higher premium in an acquisition. The logical explanation to the above is that a cooperative management and<br />

board, which are supportive of the transaction, would provide the acquirer better access to information, would<br />

work with the acquirer in cooperation in order to achieve the acquisition goals, and would hence allow the<br />

acquirer to realize higher synergies from the acquisition. This enables the acquirer to offer a higher premium on<br />

the target. The research confirms the above hypothesis, and the positive correlation assumption.<br />

3. Variance in acquisition premium is negatively correlated with the following variables: target LTM return,<br />

implied equity value to book value, enterprise value to last twelve months (“LTM”) revenues, enterprise value<br />

to LTM EBITDA and market capitalization to net income. One would expect to find a negative correlation of<br />

each of the abovementioned variables with the acquisition premium in a significant share of the regressions run.<br />

� Target LTM return – the hypothesis of negative correlation between target LTM return and premium paid is<br />

confirmed by the regressions. The above implies that acquirers are not willing to pay a significant premium for<br />

highly valued companies, i.e. those whose share price has significantly increased<br />

over the last 12 months. The negative correlation was found in most of the regressions (43 / 48) and at a<br />

significance level of 90%. Hence, the conclusion is that acquirers tend to pay a lower premium for companies<br />

whose share price has increased over the last 12 months.<br />

� Implied equity value to book value – even though the hypothesis above, of a negative correlation between<br />

implied equity value to book value and premium, is confirmed by the regression, the correlation appeared in<br />

less than 40% of the cases and at low significance levels, implying that there is no strong correlation. The above<br />

indicates that acquirers do not view equity value to book value as a proxy for the valuation of a target and hence do<br />

not decide on premium size based on that variable.<br />

� Enterprise value / revenue – the hypothesis of a negative correlation between enterprise value / revenue and<br />

premium paid is rejected in this case. The conclusion is that acquirers mostly do not value targets<br />

based on enterprise value / LTM revenue; otherwise, we would have found a negative correlation, which would<br />

imply a low tendency of acquirers to pay a high premium over the shares of a highly valued target.<br />

� Enterprise value / LTM EBITDA – the hypothesis of a negative correlation between enterprise value / LTM<br />

EBITDA and premium paid is confirmed for this case. The implied conclusion is that acquirers value targets<br />

based on enterprise value / LTM EBITDA and would not pay a high premium over a target which is highly<br />

valued (i.e. has a high enterprise value / EBITDA ratio). The above conclusion is enhanced by the fact that a<br />

negative correlation between Enterprise value / LTM revenue and premium paid was not found, implying that<br />

enterprise value / LTM EBITDA is a proxy acquirers use for valuation purposes.<br />

� Market capitalization to net income – the regressions run did not show any correlation between market<br />

capitalization to net income and premium paid. Hence the hypothesis is rejected. The conclusion drawn is that<br />

acquirers do not view the mentioned variable as a proxy for valuation (hence net income is not seen as a reliable<br />

representative of a company’s business outcome).<br />

4. Variance in acquisition premium is better explained when using 1-month premium, relative to 1-week / 1-day.<br />

The hypothesis above could be verified by consistently finding better regression results for 1-month premium,<br />

relative to 1-week and 1-day.<br />

Even though there was no absolute indication of best correlation results for regressions run with 1-day, 1-week or<br />

1-month premia, in most of the cases the results of the 1-month regressions were more accurate and significant, implying that<br />

the hypothesis was correct.<br />

The abovementioned result is a key issue in merger & acquisition transactions. When a company contemplates an<br />

acquisition of a public company, it usually references its acquisition proposal to a certain share price of the target.<br />

However, during the negotiations process, leakages of information may occur and would typically tend to push the<br />

share price of the target up, as a potential transaction would normally be executed at a premium.<br />

Hence, a premium is always referred to a share price which is unaffected by any leakages of information and the<br />

longer the time between the share price reference point and the date of the acquisition offer announcement, the less<br />

affected the share price will be.<br />

However, taking a share price which is too far away from the announcement would be inaccurate as the share price<br />

will not reflect other corporate activities that happened after the date taken into consideration and could affect the<br />

share price also.<br />

In practice, premium does not tend to be measured at more than 1 month prior to the acquisition date and the typical<br />

timing of measurements is 1-month, 1-week and 1-day prior to the transaction announcement date. Interestingly, in<br />

all regressions, results using the 1-month premium were more significant and typically had a larger number of significant<br />

variables relative to the 1-week and 1-day premium.<br />
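The three measurement points can be made concrete with a small helper (a sketch; the dates, prices and the 31-calendar-day stand-in for "1 month" are hypothetical):

```python
import datetime as dt

def premium(offer_price, price_history, announcement, days_before):
    """Premium (%) relative to the closing price `days_before` calendar days
    prior to the announcement (falling back to the nearest earlier quote)."""
    ref_date = announcement - dt.timedelta(days=days_before)
    # walk back to the latest quoted day on or before ref_date
    while ref_date not in price_history:
        ref_date -= dt.timedelta(days=1)
    return (offer_price / price_history[ref_date] - 1) * 100

# Hypothetical target: offer at 12.0; the price drifted up as rumours spread.
announcement = dt.date(2011, 6, 1)
history = {
    dt.date(2011, 5, 1): 9.0,    # ~1 month before
    dt.date(2011, 5, 25): 10.0,  # 1 week before
    dt.date(2011, 5, 31): 11.0,  # 1 day before
}
```

In this stylized example the 1-month premium (33.3%) exceeds the 1-week (20.0%) and 1-day (9.1%) premiums, mirroring the leakage argument above.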

5. Other significant findings<br />

� LTM EBITDA margin % – a positive correlation at the 5% significance level was found between LTM EBITDA margin %<br />

and premium paid. This might imply that acquirers view LTM EBITDA margin as a significant indication of a<br />

target’s value: the larger the EBITDA margin is, the higher the premium an acquirer is willing to pay.<br />

� Enterprise value and market capitalization – a significant negative correlation was found between the absolute<br />

value of both and premium paid. Hence, the larger a company is (as reflected in its enterprise value and<br />

market capitalization), the lower the % premium an acquirer is willing to pay. The above could be explained by the<br />

fact that the larger a company is, the more resources an acquirer will have to raise. Raising significant resources<br />

could negatively affect the acquirer’s ability to offer a premium.<br />

7. References<br />

Bargeron, Leonce L., Frederik P. Schlingemann, Rene M. Stulz and Chad J. Zutter, (2008), Why do private acquirers<br />

pay so little compared to public acquirers?, Journal of Financial Economics, vol. 89, pp. 375–390<br />

Gaspar, Jose-Miguel, Massimo Massa and Pedro Matos, (2005), Shareholder investment horizons and the market for<br />

corporate control, Journal of Financial Economics, vol. 76, pp. 135–165<br />

Moeller, Sara B., Frederik P. Schlingemann and Rene M. Stulz, (2004), Firm size and the gains from acquisitions,<br />

Journal of Financial Economics, vol. 73, pp. 201–228<br />

Song, Moon H. and Ralph A. Walkling, (2000), Abnormal returns to rivals of acquisition targets: A test of the acquisition<br />

probability hypothesis, Journal of Financial Economics, vol. 55, issue 2, pp. 143–171<br />

Porrini, Patrizia, (2006), Are investment bankers good for acquisition premiums?, Journal of Business Research, vol.<br />

59, issue 1, pp. 90–99<br />

THE SPECIALITIES OF THE SMALL- AND MEDIUM-SIZE ENTERPRISES’ FINANCING AND THE<br />

DETERMINANTS OF THEIR GROWTH IN HUNGARY<br />

Zsuzsanna Széles, PhD,<br />

Institute of Finance and Accounting, Szent István University, Hungary<br />

Email: Szeles.Zsuzsanna@gtk.szie.hu<br />

Zoltán Szabó, PhD<br />

Institute of Marketing, Szent István University, Hungary<br />

Email: Szabó.Zoltán@gtk.szie.hu<br />

Abstract. The main task of financing policy is to determine the ratio of own to external resources. Equity capital is the part of the<br />

company’s capital which does not have to be repaid. The larger the share of the company’s equity capital, the greater the financial independence<br />

it has. A significant share of equity capital makes the company more resistant to a crisis. As it is the owners’ property which is<br />

reduced in case of a loss, the company can more easily cope with temporary difficulties if the share of equity capital is high. The<br />

primary own resource is the called-up share capital, which is increased every year by the retained profit. Self-financing means that the<br />

company’s developments are financed from the company’s own resources.<br />

During their operations, SMEs face the following problems and challenges: low<br />

capitalization, low risk-bearing capacity, chronic underfinancing, liquidity problems, economies of scale and transaction costs, lack of<br />

transparency (lack of clarity), and insufficient funds.<br />

Our paper is a brief summary of the theoretical and empirical considerations related to the possible determinants of firm growth with a<br />

special focus on SME firms. After examining the key factors influencing the decision on a company’s capital structure, we present<br />

some facts on the situation of SMEs in Europe and especially Hungary, in order to see what type of governmental programs seem to<br />

be reasonable on economic grounds in fostering the growth of small firms.<br />

Keywords: financing, SME, growth<br />

JEL classification: G2, G3<br />

1. Introduction<br />

The role and importance of the enterprises has increased in the course of the past decades. Before the change of<br />

regime we could witness the expansion of large companies, but after the political turnover in 1989-90 a remarkable<br />

realignment occurred in the size structure of the enterprises. The “reversed pyramid” turned back, and nowadays the<br />

“hour-glass effect” is typical. It means that the entrepreneurial sector stands on the wide base of small enterprises,<br />

above this there is a narrower stratum of large companies, while the number of the medium sized enterprises is very<br />

low. In Hungary and in the neighbouring countries, one out of three employees works in a small or medium-sized<br />

enterprise. Approximately 20 million SMEs operate in the EU, contributing to the employment,<br />

income generation and exports of their countries. Far more small enterprises exist after the 2004 enlargement of the<br />

European Union than before. These enterprises play an important role in the preservation of the adaptability and<br />

dynamism of the economy and the intensity of market competition. (Szirmai, 2003)<br />

In our days, large companies cannot be considered as the engine of the economy. Therefore the literature takes<br />

increasing notice of the sector of small and medium-sized enterprises. The importance of this sector lies in the fact<br />

that small enterprises, by virtue of their size, are more flexible, hereby they are able to adapt more quickly to the<br />

changes occurring in the environment, react to the arising threats, and exploit the opportunities residing in them. The<br />

feature which can become an advantage, can be a disadvantage at the same time: in a strong market competition it is<br />

hard to keep up. A remarkable part of the newly established enterprises is not able to strengthen and subsist in the<br />

market in the long run. A lot of enterprises close down in the first few years, and there is a considerable number of<br />

companies that stagnate on a certain level, and are not able to develop. This phenomenon means that the sector of<br />

the medium-sized enterprises is too narrow. There are not enough micro and small enterprises, which are able to<br />

grow to a higher degree and join the medium-size category. According to various authors, the financial<br />

difficulties are the main barriers to the subsistence, undisturbed operation and growth of SMEs. And the reason<br />

behind this problem is not the lack of capital-owners and creditors.<br />

In Hungary, the number of registered enterprises exceeded 1.2 million in 2008. In previous years, the<br />

structure of the business sector changed: the number of businesses grew in the service sectors, while it<br />

declined in the industrial, agricultural and commercial sectors. As for the spatial distribution of firms, Budapest and<br />

Pest County have the most businesses, and the number of firms relative to the population is outstanding in Budapest.<br />

In addition, a higher than average proportion of firms can be observed in Győr-Moson-Sopron and Pest counties,<br />

while in other counties, especially Borsod-Abaúj-Zemplén, Nógrád, Jász-Nagykun-Szolnok, Békés and<br />

Szabolcs-Szatmár-Bereg, the rate of business is lower than average. The vast majority of businesses, 96%, are micro<br />

enterprises; together with small enterprises, their proportion is more than 99%. Micro and small businesses employ<br />

half of all employees, while medium-sized firms employ 20 percent. The share of micro and small enterprises in<br />

employment has increased slightly, while that of large companies has decreased. In 2007, large companies generated<br />

37 percent of total net sales, medium-sized enterprises 20 percent, and micro and small enterprises 43 percent; in<br />

total exports, the micro and small enterprises had shares of 22 and 15 percent. The structural features of the small<br />

and medium-sized sector have not, or have hardly, changed in recent years. Their management is characterised by<br />

high labour and low capital intensity, and their share of employment is greater than their share of turnover or income production.<br />

2. The specialities of small and medium-sized enterprise financing<br />

Financing means obtaining the money necessary for the enterprise's operation, regardless of the purpose for which the capital is raised.<br />

The main task of financing policy is to determine the ratio of own to external resources, i.e. to decide on the resource structure. Equity capital is the part of the company's capital which does not have to be repaid. It can come from both external and internal sources: from the registered capital, retained profit and state support, and it can be increased by issuing new shares, raising the registered capital, merging with another company or through an acquisition. The larger the share of equity in the company's capital, the greater its financial independence. A significant equity ratio makes the company more resistant to a crisis: since it is private property that is reduced in case of a loss, the company can more easily cope with temporary difficulties if the equity ratio is high. The primary private resource is the called-up share capital (which can be increased with additional external capital or other financial contributions), augmented every year by the retained profit. Self-financing means that the company's developments are financed from its own (private) resources. The external components of equity are state support (investment support forms part of own resources as capital reserves), the issue of new shares, or merger with another enterprise.<br />

The financing resources of SMEs:<br />
1. Internal resources<br />
2. External resources<br />

a/ External debt: non-institutionalized forms of financing<br />
Non-institutionalized debt financing means raising additional funds not from specialized financial institutions, but from non-financial companies and from persons whose principal activity is not the granting of loans.<br />

Groups:<br />
- Owner's or shareholder's loan,<br />
- Loans from family and friends,<br />
- Supplier credit,<br />
- Customer deposits,<br />
- Public warehousing, warehouse receipts.<br />

b/ External, equity-type financing options:<br />
- Informal investors,<br />
- Business angels,<br />
- Venture (risk) capital.<br />

c/ External debt: institutionalized forms of financing:<br />



- Bank credit,<br />
- Leasing,<br />
- Factoring.<br />

d/ State involvement in the financing of SMEs<br />
Reasons for government involvement:<br />
- creation of (flexible) employment,<br />
- stimulating competition,<br />
- keeping the economy on the move,<br />
- keeping production and distribution chains moving,<br />
- diversification of the product and service markets (product variety, flexibility in meeting unique needs),<br />
- an important tool for achieving social objectives (e.g. helping lagging regions).<br />
The direct instruments of state involvement:<br />
- tenders announced from domestic sources,<br />
- calls for proposals co-financed by the European Union,<br />
- credit access assistance programmes,<br />
- capital programmes,<br />
- guarantees and collateral security.<br />

During their operations, SMEs face the following problems and challenges (Béza et al., 2007):<br />
- functions of financial financing,<br />
- low capitalization,<br />
- low risk-bearing capacity,<br />
- chronic underfinancing and liquidity problems,<br />
- economies of scale and transaction costs,<br />
- lack of transparency (lack of clarity),<br />
- insufficient funds.<br />

Credits                          Micro                  Small                    Medium                   Total<br />
                                 2009       2010        2009       2010         2009       2010          2009       2010<br />
Credits over the year             336 338    288 308     266 749    257 968      201 685    217 696       804 772    763 972<br />
  HUF loans                       189 294    196 060     132 328    138 276      103 288    115 138       424 910    449 474<br />
  FX loans                        147 044     92 248     134 421    119 692       98 397    102 558       379 862    314 498<br />
Credits within the year           747 888    785 371     841 678  1 064 941      971 522  1 407 979     2 561 089  3 258 290<br />
  HUF loans                       579 571    644 864     635 679    832 928      690 941    929 456     1 906 191  2 407 248<br />
  FX loans                        168 317    140 507     205 999    232 013      280 581    478 523       654 897    851 042<br />
Total credit at year-end        1 496 817  1 407 035   1 286 129  1 349 614    1 237 046  1 286 007     4 019 992  4 042 655<br />
Loan classification:<br />
  Problem-free                  1 018 389    859 003     883 789    841 986    1 059 280    957 333     2 961 458  2 658 322<br />
  Special monitoring              286 905    314 065     201 189    263 036       76 777    190 256       564 871    767 357<br />
  Problematic transactions        191 524    233 966     201 150    244 592      100 989    138 418       493 663    616 977<br />
    Below average                  54 654     67 173      90 109     64 891       22 499     32 699       167 263    164 763<br />
    Doubtful                       60 055     86 225      43 468     98 876       43 444     42 739       146 968    227 841<br />
    Bad                            76 815     80 568      67 573     80 825       35 046     62 980       179 433    224 373<br />

Table 1: Credits to SMEs provided by banks, 2009 and 2010 (million HUF)<br />

Source: Hungarian Financial Supervisory Authority<br />



SME credit increased by 0.6% in 2010 over the previous year. Credit to micro enterprises decreased by 6%, while credit to small enterprises increased by 4.9%. Problematic transactions increased by 25%. Table 1 shows the details for 2009 and 2010 by classification.<br />
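The percentage changes quoted above can be recomputed directly from Table 1; the sketch below is illustrative (the variable and function names are mine, the figures are copied from the table):

```python
# Year-on-year changes recomputed from Table 1 (figures in million HUF).
# Values are taken from the table in the text; the printed percentages
# correspond to those quoted in the paragraph above.
table = {
    "total_credit": (4_019_992, 4_042_655),   # total credit at year-end, all SMEs
    "micro_credit": (1_496_817, 1_407_035),
    "small_credit": (1_286_129, 1_349_614),
    "problematic":  (493_663, 616_977),
}

def yoy_change(v2009, v2010):
    """Percentage change from 2009 to 2010."""
    return (v2010 / v2009 - 1.0) * 100.0

for name, (v09, v10) in table.items():
    print(f"{name}: {yoy_change(v09, v10):+.1f}%")
```

Running this reproduces the +0.6%, -6.0%, +4.9% and +25.0% figures cited in the text.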

Lending to small and medium-sized enterprises may be supported by government backed lending schemes. As<br />

of the third quarter of 2010 two new elements were added to the state-backed lending scheme: the existing overdraft<br />

programme (Széchenyi card) was supplemented by a working capital loan programme and an investment loan<br />

programme, both with subsidised interest rates and state guarantees (with the participation of Garantiqa Zrt.). While<br />

the subsidised interest rates encourage borrowing on the demand side, the guarantee provided by the state improves<br />

banks' willingness to lend. The efficiency of the programme is indicated by the fact that currently more than 20 per cent of the SME loan portfolio is covered by the guarantee of Garantiqa Zrt. and the corresponding counter-guarantee of the state (Figure 1). This implies that the decline in lending would be more pronounced without the guarantee schemes. Although guarantee programmes require considerable public funds, reallocating resources to such purposes is worth considering, because they can substantially stimulate economic growth.<br />

Figure 1: Loans outstanding to the SME sector backed by the guarantee of Garantiqa Zrt.<br />

Source: Hungarian National Bank<br />

Factoring accounts for 10% of the short-term bank loans of small and medium-sized companies, showing that factoring is not a financial service of marginal importance but a meaningful element of short-term financing. Garantiqa Hitelgarancia Zrt. (Garantiqa Credit Warranty Inc.) was founded in 1992 by the Hungarian state, the most important domestic commercial banks, several savings banks and some entrepreneurial associations. Its aim is to support the operation of domestic small and medium-sized enterprises by providing them guarantees, i.e. cash surety, and by lending to organizations participating in the employee shareholder programme. In 2007 the Pénzügyi Szervezetek Állami Felügyelete (PSZÁF, Hungarian Financial Supervisory Authority) authorised Garantiqa Credit Warranty Inc. to operate under terms equal to those applying to credit institutions.<br />

Garantiqa Credit Warranty Inc. fulfils all the requirements applied to credit institutions, namely in its routines of management, control and risk management and in its capital and related calculations, as acknowledged by the Hungarian Financial Supervisory Authority. With this step, the scope of activities of Garantiqa Credit Warranty Inc. was broadened to support the development of small and medium-sized enterprises: it can provide surety for all kinds of fund raising, including bank guarantees, leasing and factoring transactions, venture capital involvement, and domestic and EU tenders and applications.<br />



In addition, it supports the borrowing and bond-issuing activities of local governments and their enterprises by providing surety for them. By guaranteeing, it undertakes to pay the financial institution in place of the debtor if the debtor does not fulfil its financial obligations. The application of an enterprise seeking funds is sent by the financial institution (commercial bank, savings bank, leasing or factoring partner) in a standardised form to Garantiqa; there is no direct connection between the Credit Warranty Inc. and the clients. A further possibility is that a company requiring funds can turn directly to Garantiqa when it needs a tender guarantee. This activity makes it possible for local governments and enterprises with a realistic business plan to reach funds that they would otherwise not be able to obtain at all, or not on acceptable terms, owing to their overly risky classification. A basic aim is to develop domestic small and medium-sized enterprises by making factoring finance available to a much broader circle of clients. A factoring contract may require collateral, and the small and medium-sized enterprises that would most need this kind of service are precisely the ones that lack the required collateral; Garantiqa can help them by providing the security towards the factoring company.<br />

The delayed turnaround in corporate lending can be mainly attributed to credit supply constraints. In our November 2010 forecast we anticipated a turnaround in corporate lending in the first quarter of 2011, with significant downside risks. According to our updated lending forecast, an upturn (i.e. a net increase in the outstanding amount) in corporate lending may be expected only around the third to fourth quarter of 2011. Given that the capacity utilisation of firms may reach its historic average in the first quarter of 2011, a further expansion in firms' export markets or a possible upswing in internal demand may increase the demand for investment loans. The demand of the SME sector for working capital loans may also increase. There is a risk that the aforementioned duality in the corporate segment will persist. Owing to tight credit supply constraints, only part of credit demand is expected to be met, which may impose costs on the real economy.<br />

Figure 2: The turnover of the factoring market in Hungary<br />

Over all four years analysed, the market shrank similarly for bank factoring and for non-bank factoring companies. The turnover of non-bank factoring between 2005 and 2009 remained below 230 billion HUF, while bank factoring turnover in the analysed period almost reached 600 billion HUF. From 2008 to 2009 the turnover of bank factoring companies decreased from 592 billion HUF to 536.1 billion HUF, and non-bank factoring dropped from 226.3 billion HUF to 202.7 billion HUF. The data of the analysed companies show an increasing trend for both bank and non-bank factoring companies until 2008, with bank factoring companies realising a higher rate of growth than non-bank ones.<br />

Nowadays, the economic and financial crisis is the biggest, most complex and most difficult economic challenge. In the current economic environment, only companies that are capable and have innovative ideas will stay alive. The economic challenges take very diverse forms: rising unemployment, credit-constrained firms and households, and so on. The majority of the challenges for domestic medium-sized companies come from the changing regulatory environment, as well as from the problem that they have minimal internal resources with which to make their operations more efficient. (González-Páramo, 2006)<br />

The management structure is determined by the history of the companies. Many of the "emerging" companies have grown out of the smaller, family-managed business segment. They had good ideas and marketable products; in parallel, however, not every company has developed a modern management approach with a clear division of responsibilities and tasks. The leader of a medium-sized company may lack serious economic and legal knowledge, and there may be no separate department in the organization that can follow important legislative changes and new business opportunities, for example in taxation. Ambitious Hungarian medium-sized enterprises often want to save tax and do not care about risk-reduction options; many companies focus on the direct benefit instead of the uncertainties inherent in their operations. Bank financing also often causes problems, since medium-sized enterprises are inherently risky for banks. The enterprise has to strive to the utmost to fulfil the conditions imposed by the banks regarding its risks, however it manages to do so. For example, a bank may press for a complete restructuring of a group structure in order to carry less risk from the banking point of view: the bank may prefer that the group's most profitable company fall within its competence, even if that company did not originally belong under the bank's guarantee. (Világgazdaság, 2010)<br />

3. Company liquidations and winding up<br />

The situation of enterprises is still difficult, but the number of liquidations is increasing more slowly. In the first quarter of 2010 the growth rate of liquidations slowed compared with the same period of the previous year, but the national average masks sharp regional disparities.<br />

Examining the first quarter of 2010, in Western Hungary the number of liquidations had been decreasing for months, while in the Central region and in Eastern Hungary the number of companies going into liquidation was still growing. The evolution of liquidations clearly reflects the split in the economy: in the western regions the number of failing companies fell considerably, while in the eastern regions, together with Central Hungary, the number of newly launched liquidations rose by double digits. The peak increase was recorded in Northern Hungary, where the number of insolvent companies was 43.1 percent higher than a year earlier.<br />

In terms of final settlements (winding up) there was no such sharp divergence: in most regions fewer firms were closed than in the previous year. The largest decrease, 32 per cent, was shown by the Northern Great Plain region. Growth was recorded only in Northern Hungary and in the dominant Central region; the Central region saw the fastest increase in the number of closed firms, 26.7 percent over the previous year.<br />

The number of newly founded companies did not increase in any region of the country. With the exception of the Southern Great Plain, a double-digit (around 20 percent) decline was observed everywhere in the January-March period year on year; in the Southern Great Plain 9.8 percent fewer new companies were incorporated than a year earlier.<br />

The majority of small and medium-sized enterprises are exposed to exchange rate depreciation, yet exchange rate risk management techniques are almost unknown to them. The main reason for ignoring FX risks is that FX risk management tools are thought to be expensive, complicated or ineffective. The majority of enterprises think there are no suitable tools to manage FX risks, or they expect external solutions, such as the introduction of the euro, to decrease their risks. (Bodnár, 2009)<br />



4. Material and method<br />

The research has been conducted on the basis of secondary data, available from the databases of the Hungarian National Bank and the Hungarian Financial Supervisory Authority.<br />

Analytical trend calculation is the most frequently used method of trend calculation. The permanent tendency of a time series can be expressed by a well-fitting function. In the course of fitting the function, similarly to regression calculation, we use the least squares method to search for the trend best fitting the values of the time series. Thus the analytical trend is the specific function for which the sum of the squared differences between the time-series values and the function's values at the same dates is the least. (Szűcs, 2004)<br />

Σ (y_i − ŷ_i)² → min.<br />

where y_i: the i-th value of the time series,<br />
ŷ_i: the value of the trend at the i-th date (i = 1, …, n).<br />

During the trend calculation I used the following types of function:<br />
- linear trend,<br />
- exponential trend.<br />

The trend calculation is very similar to a regression model with two variables, in which the result variable is the<br />

value of the time series, the explanatory variable is the trend variable representing the progress of time. Now I will<br />

mention two of the differences between the regression analysis with two variables and the analytical trend<br />

calculation (Rappai, 2001):<br />

- while in the regression model theoretically the order of observations can be changed at will, the order in the<br />

trend calculation is defined (determined by the time);<br />

- while theoretically the value of the explanatory variable in the regression model is free, the value of the trend variable in the trend function is usually interpreted on the set of whole numbers, so the difference between consecutive values is usually 1.<br />

When choosing the type of trend, the important considerations were whether I was looking for the rate or the pace of increase, and how long the available time series was.<br />

4.1. Linear trend<br />

The basic tendency can be expressed by a linear function if the development of the time series is steady and the rate of change over time is constant.<br />

The general form of the linear function:<br />

ŷ = a + bx<br />

where ŷ: the value of the trend,<br />
x: the values of the time variable, equidistant from each other,<br />
a and b: the unknown parameters of the function.<br />

The aim is to estimate the parameters, which can be determined from the normal equations. The normal equations are obtained by equating to zero the first partial derivatives of the function Σ (y_i − ŷ_i)² → min. (Szűcs, 2004)<br />
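As a sketch of the procedure just described, the normal equations of the linear trend can be solved directly. The code below is illustrative only: the function name is mine, the trend variable x runs 1, 2, …, n as described in the text, and the data series is made up, not the paper's.

```python
# Minimal sketch of analytic linear-trend fitting, ŷ = a + b·x, by least squares.
def linear_trend(y):
    n = len(y)
    xs = range(1, n + 1)          # trend variable: 1, 2, ..., n (time order fixed)
    sx, sy = sum(xs), sum(y)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * v for x, v in zip(xs, y))
    # Normal equations (first partial derivatives of the SSE set to zero):
    #   a·n  + b·Σx  = Σy
    #   a·Σx + b·Σx² = Σxy
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = linear_trend([600, 1000, 1400, 1800])  # perfectly linear illustrative series
print(a, b)  # intercept 200, slope 400
```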

4.2. Exponential trend<br />


We use it if the relative change of the examined time series, i.e. the pace of the change, is roughly constant. It is often used for social-economic time series: a series of economic growth index numbers, the tendency of the population, or in times of inflation almost all current-price index numbers show, within a limited interval, an exponential increase (or decrease), so they can be described with the exponential trend. (Hunyadi & Vita, 2002)<br />

The equation of the exponential function:<br />



ŷ = a·b^x<br />

where a: the trend value belonging to the x = 0 period of time,<br />
b: the average pace of change over time,<br />
x = n+1, n+2, …, n+k, where n indicates the number of examined periods.<br />
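The exponential trend is usually fitted by taking logarithms, log ŷ = log a + x·log b, which turns the problem into the linear least-squares case of Section 4.1. A minimal sketch, with a made-up illustrative series and a function name of my own choosing:

```python
import math

# Sketch: fit the exponential trend ŷ = a·b^x by log-linearization,
# applying the linear least-squares trend to log y.
def exponential_trend(y):
    logs = [math.log(v) for v in y]
    n = len(logs)
    xs = range(1, n + 1)
    sx, sy = sum(xs), sum(logs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * v for x, v in zip(xs, logs))
    lb = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # log b
    la = (sy - lb * sx) / n                          # log a
    return math.exp(la), math.exp(lb)

a, b = exponential_trend([110.0, 121.0, 133.1, 146.41])  # 10% yearly growth
print(a, b)  # a ≈ 100, b ≈ 1.10
```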

5. Results<br />

Figure 3 shows the total credits of the SME sector within the year between 2001 and 2010. No trend line could be fitted to the credits over the year, because the differences between the examined years were too large in this segment.<br />

Figure 3: Total credits within the year of SME sector from financial institutions (Million HUF)<br />

Source: Own construction based on data from the Hungarian Financial Supervisory Authority<br />

For the total credits and the HUF loans I fitted exponential trends to the volume of the SME sector's credits; for the FX loans a linear trend was fitted. The average yearly trend increase of total loans is 14.7%.<br />

The exponential trend equation of the total credits within the year is:<br />

y = 708.7·e^(0.147x), R² = 0.888, equivalently y = 708.8·1.147^x<br />

The average yearly trend increase between 2001 and 2010 is 14.7% for total credits. The trend value is 708.8 at x = 0 (2001).<br />

The exponential trend equation of the HUF loans is:<br />

y = 578.8·e^(0.129x), R² = 0.847, equivalently y = 578.8·1.129^x<br />

The average yearly trend increase between 2001 and 2010 is 12.9% for the HUF loans. The trend value is 578.8 at x = 0 (2001).<br />

The striped columns denote the FX (foreign currency) loans; their linear trend equation is:<br />

y = 79.75x + 34.36, R² = 0.918<br />

The fit of the trend is close (R² = 0.918). The average yearly increase of the FX loans is 79.75 billion HUF.<br />
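The two algebraic forms quoted for each exponential trend, y = a·e^(cx) and y = a·b^x, are linked by b = e^c; for small exponents e^c ≈ 1 + c, which is why the exponent can be read as the approximate average yearly growth rate. A small illustrative check of the exact conversion (function name is mine):

```python
import math

def growth_rate_from_exponent(c):
    """Exact average yearly growth implied by y = a·e^(c·x): b = e^c, rate = b - 1."""
    return math.exp(c) - 1.0

# The exponent itself (0.147 → 14.7%, 0.129 → 12.9%) is the small-exponent
# approximation; the exact rates implied by the e-form are slightly higher.
print(round(growth_rate_from_exponent(0.147) * 100, 1))  # ≈ 15.8
print(round(growth_rate_from_exponent(0.129) * 100, 1))  # ≈ 13.8
```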



[Figure 4 chart: total credit of the SME sector, 2001-2010, axis in million HUF up to 5 000 000; series: total credit (trend), total credit (real data), linear trend; fitted trend y = 394.1x + 600.52, R² = 0.9285]<br />

Figure 4: Total credits of SME sector from financial institutions (Million HUF)<br />

Source: Own construction based on data from the Hungarian Financial Supervisory Authority<br />

Figure 4 shows the total credits of the SME sector between 2001 and 2010. The linear trend is fitted as a solid line to the 2001-2008 data; the forecast (marked with a square and a broken line) covers 2009 and 2010. The fit of the trend is close. The average yearly credit increase is 394.1 billion HUF. A big difference can be seen between the pre- and post-crisis figures: actual total credit was about 2,900 billion HUF in 2009, while the trend forecast is about 4,200 billion HUF, a huge drop.<br />

6. Summary<br />

Micro and small businesses employ half of all employees, and medium-sized firms a further 20 percent. The employment share of micro and small enterprises has increased slightly, while that of large companies has fallen. In 2007, large companies generated 37 percent of total net sales, medium-sized enterprises 20 percent, and micro and small enterprises 43 percent; micro and small enterprises accounted for 22 and 15 percent of total exports, respectively. Lending to small and medium-sized enterprises may be supported by government-backed lending schemes. As of the third quarter of 2010 two new elements were added to the state-backed lending scheme: the existing overdraft programme (Széchenyi card) was supplemented by a working capital loan programme and an investment loan programme, both with subsidised interest rates and state guarantees. The demand of the SME sector for working capital loans may also increase. There is a risk that the aforementioned duality in the corporate segment will persist. Owing to tight credit supply constraints, only part of credit demand is expected to be met, which may impose costs on the real economy.<br />

The average yearly increase of total SME credit was 394.1 billion HUF between 2001 and 2008. A big difference can be seen between the pre- and post-crisis figures: actual total credit was about 2,900 billion HUF in 2009, while the trend forecast is about 4,200 billion HUF, a huge drop.<br />

7. References<br />

Szirmai P. (2003): Szemelvenygyőjtemeny a kis- es kozepes vallalkozasok a magyar es nemzetkozi gazdasagban<br />

cimő targyhoz, Kisvallalkozasfejlesztesi Kozpont, Budapest,<br />

Apatini K. (1999): A kis- es kozepvallalkozasok finanszirozasa, KJK Kerszov, Budapest, 1999<br />

Szerb L. (2005): Global Entrepreneurship Monitor 2005 Magyarorszag. A vallalkozoi aktivitast es a vallalkozast<br />

befolyasolo tenyezık alakulasa Magyarorszagon az Europai Unios csatlakozas utan, Pecsi Tudomanyegyetem<br />

Kozgazdasagtudomanyi Kar<br />

Kallay L., Imreh Sz. (2004): A kis- es kozepvallalkozas-fejlesztes gazdasagtana, Aula Kiado, Budapest, 2004<br />

Kallay L. (2002): Paradigmavaltas a kisvallalkozas-fejlesztesben, Kozgazdasagi Szemle, XLIX. evf., 2002. julius-augusztus, pp. 557-573.<br />

385


Béza D., Csapó K., Farkas Sz., Filep J., Szerb L. (2007): Kisvállalkozások finanszírozása. Perfekt Kiadó, Budapest.<br />

pp. 27-37.<br />

J. M. González-Páramo (2006): Corporate finance and Monetary policy: the role of small and medium-sized<br />

enterprises. ECB Conference on Corporate Finance and Monetary Policy. 6 p.<br />

Világgazdaság online (2010): KOMOLY KIHÍVÁSOK ÉS MEGOLDÁSOK<br />

http://www.vg.hu/gazdasag/adozas/komoly-kihivasok-es-megoldasok-310044 April 2010<br />

Kohegyi K. (2001): A vallalkozasok finanszirozasa, Cegvezetes, 2001. oktober, pp. 138-149.<br />

Arvai Zs. (2002): A vallalatfinanszirozas uj fejlodesi iranyai, MNB Muhelytanulmanyok (26)<br />

MNB - Hungarian National Bank (2011): Report of financial stability (April 2011). MNB, Budapest. 79. p.<br />

Bodnár K. (2009): Exchange rate exposure of Hungarian enterprises. MNB, 2009. Occasional Papers 80.<br />

PSZÁF - Hungarian Financial Supervisory Authority (2010): Aggregated data of the banking sector in 2010.<br />





CORPORATE GOVERNANCE<br />





CORPORATE GOVERNANCE PRACTICES AND THEIR IMPACT ON FIRM'S CAPITAL<br />
STRUCTURE AND PERFORMANCE: THE CASE OF THE PAKISTANI TEXTILE SECTOR<br />

Prof. Dr. Hayat M. Awan, Khuram Shahzad Bukahri & Rameez Mahmood Ansari,<br />

Bahauddin Zakariya University Multan, Pakistan.<br />

Abstract<br />

Manuscript Type: Empirical.<br />

Research Question/Issue: This paper investigates the corporate governance practices currently practiced in Pakistani firms and the relationship of internal corporate governance structures with capital structure and with firm performance in the Pakistani textile sector.<br />

Research Findings/Insights: The study used a sample of 100 manufacturing companies from the textile sector listed on the Karachi Stock Exchange (KSE) and the Lahore Stock Exchange (LSE). Regression analysis and structural equation modeling are used to determine the relationships of internal corporate governance structures with capital structure and with firm performance. For capital structure, significant relationships were found with Number of Board of Directors, Board Composition, CEO Duality, Board Skill, Audit Committee Composition, Chairman Independent or Non-executive, CFO Attends All Board Meetings, and Separate Management Consultants. For firm performance, significant relationships were found with CEO Duality, Audit Committee Composition, CFO Attends All Board Meetings, and Company Chairman in Audit Committee. The findings also suggest that all internal corporate governance structures taken together affect capital structure negatively and firm performance positively.<br />

Theoretical/Academic Implications: The findings support the theories of corporate governance and are consistent with theoretical models in which good corporate governance practices lead to lower leverage and improved performance. Future research should include both internal and external corporate governance structures and examine their relationship with capital structure and firm performance, or use different proxies of capital structure and firm performance.<br />

Practitioner/Policy Implication: Our results stress the importance of corporate governance practices for firms seeking to reduce their debt and improve their performance. If firms want to attract more capital and improve investor confidence, they will have to improve their corporate governance practices.<br />

Keywords: Corporate Governance, Capital Structure, Firm Performance<br />

1. Introduction<br />

Mechanisms that protect the interests of shareholders are known as corporate governance mechanisms. Good corporate governance helps economic development. The last two decades have seen an increasing intensity of research on the subject of corporate governance. Firms with weaker governance structures face more agency problems, and the managers of those firms obtain more private benefits (Core et al., 1999). According to Chuanrommanee and Swierczek (2007), corporate governance practices in financial corporations of the ASEAN countries are consistent with international practices. Corporate governance has become one of the important research areas in Pakistan since the publication of the SECP Corporate Governance Code 2002 for publicly listed companies. The code met with much criticism at the start, and there were many difficulties in implementing and enforcing it. Despite these criticisms, however, the Code has marked the start of a new era of corporate governance in Pakistan. Rais and Saeed (2005) argued that the adoption of the Corporate Governance Code has improved the overall structure of corporations and the business environment by ensuring transparency and accountability in the reporting framework.<br />

Corporate failures such as Enron, WorldCom, One-Tel, Ansett and Parmalat have awakened the need to strengthen corporate governance practices not only in the developed world but also in the developing world. Most corporate governance research has been done on developed economies (Rajagopalan and Zhang, 2008), and only limited research addresses corporate governance issues applicable to developing economies. Pakistani corporations have traditionally been family-controlled, with very concentrated ownership; the families control the firms through pyramidal and tunneling ownership structures. Ghani and Ashraf (2005) argued that families in Pakistan manage business groups that combine different business entities. They also suggested that firms whose external shareholders are affiliated with these business groups have weaker corporate governance mechanisms and are<br />

390


less transparent. And due to this, market value of those firms is discounted without even the concern that<br />

they are producing greater profits. Institutional framework has to be strengthened in order to improve the<br />

corporate governance mechanisms of a country.<br />

The main focus of this paper is to document the corporate governance practices currently followed in

Pakistani firms, and to examine the relationships between corporate governance and capital structure and

between corporate governance and firm performance in the Pakistani textile sector, using KSE and LSE

listed firms. Several studies have tested hypotheses relating characteristics of corporate governance to

capital structure and to performance. However, very few studies have been conducted in the context of

Pakistan or Pakistani listed firms, and those are limited to a few characteristics and structures of corporate

governance.

This paper is organized as follows: Section 2 provides the literature review and hypothesis development,

Section 3 describes the methodology, Section 4 discusses the results, and Section 5 concludes.

2. Literature Review and Hypothesis Development<br />

2.1. Corporate Governance and Capital Structure<br />

According to modern theories, agency cost is one of the main determinants of capital structure, and since

corporate governance mechanisms reduce agency problems, the two are linked. Claessens et al. (2002)

argue that good corporate governance mechanisms help firms through better access to financing and a

lower cost of capital.

The board of directors is responsible for managing the overall firm and its operations, and plays a vital

role in deciding on the financing mix. Pfeffer and Salancik (1978) found a significant relationship between

capital structure and board size. Berger et al. (1997) found that firms with large boards of directors have

low leverage levels, and that larger boards also exert pressure to enhance firm performance. On the other

side, Jensen (1986) states that firms with high leverage levels have large boards. Wen et al. (2002) found a

positive relationship between capital structure and board size, with large board size associated with higher

debt levels, and Abor (2007) likewise found a positive correlation between board size and capital structure.

Non-executive directors are an essential part of modern corporate governance mechanisms. Pfeffer and

Salancik (1978) state that the presence of non-executive directors reduces uncertainty about the company

and helps in raising capital, and that a higher number of non-executive directors leads to higher leverage

levels. Jensen (1986) and Berger et al. (1997) found the same: companies with high leverage levels have

relatively more external directors. Abor (2007) concluded that there is a positive correlation between board

composition and capital structure. On the other side, Wen et al. (2002) found a significant negative

relationship between leverage levels and the number of external directors, concluding that the presence of

external directors leads to low leverage levels.

CEO/Chair duality is also one of the important features of corporate governance and can directly affect

the capital structure decisions of the company. Fama and Jensen (1983) argue that the roles of CEO and

chairman should be separated, as the CEO is the chief decision-management authority and the chairman

the chief decision-control authority. Fosberg (2004) found that firms with separate CEO and chairman

positions have higher leverage levels, resulting in a more nearly optimal amount of debt. Abor (2007)

concluded that there is a positive correlation between CEO duality and capital structure.

2.2. Corporate Governance and Firm Performance<br />

Black et al. (2006) conclude that firms with higher governance scores have higher market values. Chen

(2008) suggested that establishing effective governance mechanisms improves firm value, and Harford et

al. (2008) concluded that poor governance destroys firm value.


Mintzberg (1983) and Kosnik (1990) argue that large board size negatively influences firm performance.

Lipton and Lorsch (1992) and Jensen (1993) argue that larger boards become less effective because of

poorer communication and decision making, while Van den Berghe and Levrau (2004) argue that

increasing the number of directors brings greater expertise, knowledge and skills than smaller boards. A

strongly negative relationship between firm performance and board size was found by De Andres, Azofra

and Lopez (2005). Analysis by Dalton and Dalton (2005) found that superior performance resulted from

larger boards, while Bhagat and Black (1999) and Hermalin and Weisbach (2003) proposed the opposite

view. Brown and Caylor (2006) showed that firms with board sizes between 6 and 15 yield higher returns

on equity, and Jackling and Johl (2009) found that large board size impacts performance positively.

Baysinger and Butler (1985) found that boards with more outside directors performed better than other

firms. Fosberg (1989) found no relationship between the proportion of outside directors and firm

performance. Rosenstein and Wyatt (1990) found a slight increase in stock prices when firms appoint

more outside directors, whereas Hermalin and Weisbach (1991) found no relationship between board

composition and firm value. Yermack (1996) showed that the proportion of non-executive directors does

not significantly affect firm performance. Agrawal and Knoeber (1996), in a study of US firms, found a

negative relationship between the proportion of outside directors and firm performance, and Shrader et al.

(1997) found a negative relationship between the proportion of women on the board and firm value.

Bhagat and Black (1999) found no significant relationship between board independence and long-run firm

performance for US firms. Roberts et al. (2005) suggest that the active participation of an independent

director brings independent ideas to the team and can help the board and the organization function better.

In support of this view, Brown and Caylor (2006) found that firms with more independent directors

performed better than others, with higher ROE, greater profits, more dividends and higher stock

repurchases; the most important factor affecting firm performance in their study was director

independence. Chan and Li (2008) found that firm value is enhanced by the presence of expert-independent

directors on the board, and Jackling and Johl (2009) found that a large number of independent or outside

directors on the board impacts performance positively.

Rechner and Dalton (1991) found that firms with CEO duality performed better. Daily and Dalton

(1992), studying entrepreneurial firms, found no relationship between CEO duality and firm performance.

Peel and O'Donnell (1995) showed that splitting the two roles leads to improved performance, and

Yermack (1996) argued that firms with separate CEO and chairman positions are more valuable. Brickley

et al. (1997) concluded that CEO duality does not lead to inferior performance. Sanda et al. (2003) also

found a positive relationship between separate CEO and chairman positions and firm performance, and

Brown and Caylor (2006) likewise concluded that firms are more valuable when the CEO and chairman

positions are separate. Reviewing studies of the relationship between CEO/Chair duality and other

measures, Kang and Zardkoohi (2005) concluded that the results are complex: if duality exists as a reward,

it might result in positive performance, but if the reason is to increase the CEO's power then it may have a

negative effect on firm performance. Elsayed (2007) concluded that CEO duality has no overall effect on

performance, although the effect varies from industry to industry.

Uzun et al. (2004) found that higher audit committee independence results in a lower chance of fraud;

an independent audit committee thus reduces agency costs and improves overall performance. Brown and

Caylor (2006) found a positive relationship between dividend yield and independent audit committees, but

no relationship between an independent audit committee and performance. Chan and Li (2008) found that

firm value is enhanced by the presence of expert-independent directors on the audit committee.

Lybaert (1998) stated that the better performance of some firms is due to a higher level of education

among their entrepreneurs. On the other side, Powel (1991) stated that the occupational and professional

affiliations of qualified managers with firms may have a negative effect, and Lawrie (1998) stated that

gaps in management expertise are considered less of a barrier to SME development.

What happens at board meetings, and how many directors attend them, tells shareholders how seriously

governance responsibilities are taken. According to Lipton and Lorsch (1992), more frequent board

meetings result in improved performance. A positive association exists between the frequency of meetings

and firm performance, and also between director attendance and firm performance (Brown and Caylor,

2006).

3. Methodology<br />

This is a quantitative and essentially exploratory study. The textile sector is the main population of the

study, and KSE and LSE listed firms form the sample, selected on the basis of firms' responses about their

corporate governance structures and the availability of secondary data on the other required variables.

Non-probability sampling is used to select firms. An initial sample of 150 firms was reduced to a final

sample of 100 firms with complete data on all internal corporate governance structures and financial data

for the period 2005 to 2009. Data on the required variables are collected from primary and secondary

sources: data on internal corporate governance structures through self-administered surveys, mail surveys,

phone surveys, interviews and annual reports, and financial data from annual reports. Regression analysis

and structural equation modeling are used.

3.1. Capital Structure Measure<br />

Debt ratio is used as the proxy for capital structure, measured by dividing total debt by total assets. The

debt-to-assets ratio was used previously by Short and Keasey (1999) and Holderness et al. (1999) in their

studies.

3.2. Firm Performance Measure<br />

Return on Assets is used as the proxy for performance, measured by dividing net earnings by total assets.

Muth and Donaldson (1998) and Erhardt et al. (2003) used ROA to measure firm performance in their

studies.
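The two proxies above are simple ratios. A minimal sketch, for illustration only (the function names and example numbers are made up, not taken from the study's data):

```python
# Sketch of the two proxies in sections 3.1 and 3.2.
# Variable names and example values are illustrative assumptions.

def debt_ratio(total_debt: float, total_assets: float) -> float:
    """Capital structure proxy: total debt divided by total assets."""
    return total_debt / total_assets

def return_on_assets(net_earnings: float, total_assets: float) -> float:
    """Firm performance proxy: net earnings divided by total assets."""
    return net_earnings / total_assets

# A made-up firm-year observation:
print(debt_ratio(600.0, 1000.0))        # 0.6
print(return_on_assets(80.0, 1000.0))   # 0.08
```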

3.3. Corporate Governance Measures<br />

Number of Board Meetings, Board Composition, Board Size, CFO Attends All Board Meetings, Directors

in Audit Committee, CEO Duality, Board Skill, Chairman of Audit Committee Non-executive, Number of

Audit Committee Meetings in a Year, Company Chairman in Audit Committee, Number of Audit

Committee Members, Chairman of Company Independent or Non-executive, Audit Committee

Composition and Separate Management Consultants are used as the measures of corporate governance.

Table 1 – Measures of Corporate Governance<br />

1. Number of Board Meetings: total number of board meetings.

2. Board Composition: number of independent or non-executive directors divided by the total number of directors on the board.

3. Board Size: total number of directors on the board.

4. CFO Attends All Board Meetings: 1 if the CFO attends all board meetings, 0 otherwise.

5. Directors in Audit Committee: number of directors in the audit committee divided by the total number of audit committee members.

6. CEO Duality: 1 if the CEO is also the chairman, 0 otherwise.

7. Board Skill: number of directors holding a professional degree or qualification divided by the total number of directors on the board.

8. Chairman of Audit Committee Non-executive: 1 if the chairman of the audit committee is non-executive, 0 otherwise.

9. Number of Audit Committee Meetings in a Year: total number of audit committee meetings in a year.

10. Company Chairman in Audit Committee: 1 if the company chairman sits on the audit committee, 0 otherwise.

11. Number of Audit Committee Members: total number of audit committee members.

12. Chairman of Company Independent or Non-executive: 1 if the chairman of the company is independent or non-executive, 0 otherwise.

13. Audit Committee Composition: number of independent or non-executive members divided by the total number of audit committee members.

14. Separate Management Consultants: 1 if separate management consultants are hired by the company, 0 otherwise.
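As a rough sketch of how a few of the Table 1 measures could be coded from raw board data (the input record and its field names are hypothetical, not the study's actual instrument):

```python
# Hedged sketch: encoding a subset of the Table 1 governance measures.
# The record layout (keys below) is an assumption for illustration.

def encode_governance(rec: dict) -> dict:
    board_size = rec["n_directors"]
    return {
        # Measure 3: total number of directors on the board.
        "board_size": board_size,
        # Measure 2: share of independent/non-executive directors.
        "board_composition": rec["n_independent_directors"] / board_size,
        # Measure 6: 1 if the CEO also chairs the board, else 0.
        "ceo_duality": 1 if rec["ceo_is_chairman"] else 0,
        # Measure 7: share of directors with a professional degree.
        "board_skill": rec["n_professional_degrees"] / board_size,
        # Measure 13: share of independent/non-executive audit committee members.
        "audit_committee_composition":
            rec["n_independent_ac_members"] / rec["n_ac_members"],
    }

# Hypothetical firm matching the modal sample profile (7 directors,
# 3-member audit committee with 2 non-executives, no CEO duality):
example = {"n_directors": 7, "n_independent_directors": 4,
           "ceo_is_chairman": False, "n_professional_degrees": 5,
           "n_independent_ac_members": 2, "n_ac_members": 3}
print(encode_governance(example))
```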



3.4. Control Variables<br />

Age of firm and size of firm are used as control variables. Age of firm is measured as the logarithm of the

number of years between the observation year and the year of incorporation; it has been used as a control

variable in studies such as Li and Simerly (1998), Dimelis and Louri (2002) and Gedajlovic et al. (2005),

while Drobetz et al. (2004) measured firm age as the number of years since listing. Size of firm is

measured as the logarithm of the book value of assets, a measure popular in most studies, for example

Renneboog (2000), Bauer et al. (2004) and Drobetz et al. (2004).
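The two control variables can be sketched as log transforms. The base-10 logarithm and the example values below are assumptions, since the paper does not state the base:

```python
import math

# Sketch of the section 3.4 control variables. The choice of base-10
# logarithm and the example inputs are assumptions for illustration.

def firm_age(observation_year: int, incorporation_year: int) -> float:
    """Log of the number of years since incorporation."""
    return math.log10(observation_year - incorporation_year)

def firm_size(book_value_of_assets: float) -> float:
    """Log of the book value of assets."""
    return math.log10(book_value_of_assets)

# A hypothetical firm incorporated in 1989, observed in 2009,
# with assets of 1,000,000 (currency units unspecified):
print(firm_age(2009, 1989))      # log10(20)
print(firm_size(1_000_000.0))    # 6.0
```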

4. Results<br />

4.1. Corporate Governance Practices<br />

Of the sample organizations, 81% had 7 directors on their board; 97% had independent or non-executive

directors, and 25% had 4 such directors; 93% had no representatives of minority shareholders, while 5%

had one; 70% did not have an independent or non-executive chairman; and 55% did not have the same

individual holding the posts of Chairman and Chief Executive. In all organizations the functions of the

Chairman and of the CEO were clearly defined by the Board of Directors, all had a Statement on Ethics

and Ethical Business Practices, and in all of them the terms of appointment and remuneration packages of

the CEO and executive directors were approved by the Board. 46% of organizations held 4 board meetings

in the past 12 months, and in 38% of organizations 5 board members held professional degrees. In 50% of

organizations the CFO did not attend board meetings; all CFOs held degrees in accounting, economics or

business administration, most being CA, ACA or FCA qualified. The Company Secretary attended all

board meetings in every organization; all held degrees in accounting, economics or business

administration, most holding Masters, MBA or CA qualifications. In no organization did the external

auditors, or their spouses or children, hold shares in the company. All organizations had an audit

committee: 91% had 3 members on it, 95% had 3 directors on it, 73% did not include the company

chairman, 57% had 2 non-executive members, 61% had a non-executive chairman of the committee, and

74% held 4 audit committee meetings in the past 12 months; in all organizations the CFO attended audit

committee meetings. All organizations had an internal auditing department whose head had direct access

to the Chairman of the Audit Committee, all appointed external auditors as per the prescribed guidelines,

and 57% had hired separate management consultants for advice.

4.2. Corporate Governance and Capital Structure<br />

Table 3 – Corporate Governance impacting Debt Ratio

Variable                                   B      Std. Error   Beta     t        Sig.
(Constant)                                 .641   .320                  2.001    .046
Size Of Firm                              -.073   .018         -.205   -4.179    .000
Board Size                                 .070   .015          .226    4.615    .000
Board Composition                          .341   .059          .359    5.752    .000
CEO Duality                                .047   .024          .120    1.977    .049
Board Skill                                .182   .067          .144    2.705    .007
Audit Committee Composition               -.230   .064         -.284   -3.597    .000
Chairman Independent or Non-executive     -.059   .029         -.139   -2.027    .043
CFO Attends All Board Meetings            -.032   .019         -.083   -1.672    .095
Separate Management Consultants           -.056   .020         -.144   -2.853    .005

Dependent Variable: Debt Ratio

R Square = 0.243   S.E. = 17.2   F = 7.486   Sig. = 0.000

In examining the relationship of the corporate governance structures with Debt ratio, the dependent

variable and proxy for capital structure, with Size of Firm and Age of Firm as control variables, the R

square value indicates that 24.3 percent of the variation is explained by the independent variables and the

remaining 75.7 percent by other factors. The ANOVA table indicates that the overall regression is

significant. Results show that Board Size, Board Composition, CEO Duality, Board Skill, Directors in

Audit Committee, Company Chairman in Audit Committee, Chairman of Audit Committee Non-executive

and Number of Audit Committee Meetings in a Year are positively related to capital structure, while Audit

Committee Composition, Chairman Independent or Non-executive, Number of Board Meetings, CFO

Attends All Board Meetings, Number of Audit Committee Members and Separate Management

Consultants are negatively related to it. Significant relationships were found for Board Size, Board

Composition, CEO Duality, Board Skill, Audit Committee Composition, Chairman Independent or

Non-executive, CFO Attends All Board Meetings and Separate Management Consultants; non-significant

relationships were found for Number of Board Meetings, Number of Audit Committee Members, Directors

in Audit Committee, Company Chairman in Audit Committee, Chairman of Audit Committee

Non-executive and Number of Audit Committee Meetings in a Year.
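The regression behind Table 3 relates governance measures to the debt ratio. As a minimal sketch of the same idea, a bivariate least-squares fit of debt ratio on board size with made-up data (the study itself estimates a multiple regression with all governance measures plus the controls):

```python
from statistics import mean

# Hedged sketch: simple bivariate OLS, y = a + b * x.
# The data points below are invented for illustration; only the
# positive sign of the slope mirrors Table 3's Board Size coefficient.

def ols_slope_intercept(x, y):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    xbar, ybar = mean(x), mean(y)
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b

board_size = [5, 7, 7, 8, 9, 11]                  # hypothetical boards
debt = [0.40, 0.52, 0.48, 0.55, 0.60, 0.66]       # hypothetical debt ratios
a, b = ols_slope_intercept(board_size, debt)
print(round(b, 3))  # positive slope, same sign as Table 3's Board Size
```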

4.3. Corporate Governance and Return on Assets<br />

Table 4 – Corporate Governance impacting Return on Assets in presence of Debt Ratio

Variable                                   B      Std. Error   Beta     t        Sig.
(Constant)                                 .041   .141                   .293    .770
Debt Ratio                                -.181   .023         -.409   -7.975    .000
CEO Duality                               -.019   .010         -.111   -1.828    .068
Audit Committee Composition               -.051   .029         -.141   -1.777    .076
CFO Attends All Board Meetings            -.041   .008         -.237   -4.779    .000
Company Chairman In Audit Committee        .026   .010          .134    2.625    .009

Dependent Variable: Return On Assets

R Square = 0.259   S.E. = 0.07543   F = 7.638   Sig. = 0.000

In examining the relationship of the corporate governance structures with Return on Assets, the dependent

variable and proxy for firm performance, with Debt ratio, Size of Firm and Age of Firm as control

variables, the R square value indicates that 25.9 percent of the variation is explained by the independent

variables and the remaining 74.1 percent by other factors. The ANOVA table indicates that the overall

regression is significant. Results show that Board Size, Board Composition, Board Skill, Company

Chairman in Audit Committee and Separate Management Consultants are positively related to Return on

Assets, while CEO Duality, Audit Committee Composition, Chairman Independent or Non-executive,

CFO Attends All Board Meetings, Number of Audit Committee Members, Directors in Audit Committee

and Chairman of Audit Committee Non-executive are negatively related to it. No relationship was found

with Number of Board Meetings or Number of Audit Committee Meetings in a Year. Significant

relationships were found for Debt ratio, CEO Duality, Audit Committee Composition, CFO Attends All

Board Meetings and Company Chairman in Audit Committee; non-significant relationships were found for

Size of Firm, Age of Firm, Board Size, Board Composition, Board Skill, Chairman Independent or

Non-executive, Number of Audit Committee Members, Directors in Audit Committee, Chairman of Audit

Committee Non-executive and Separate Management Consultants.

4.4. Corporate Governance Impacting Debt Ratio and ROA

Figure 1 - Corporate Governance Impacting Debt Ratio and ROA<br />

Chi-Square = 741.47 df=101 p-value = 0.00000 RMSEA = 0.13<br />

The data fit the model well and the model is significant. The corporate governance structures taken

together significantly and negatively impact capital structure; taken together they impact Return on Assets

positively, while the debt ratio impacts Return on Assets negatively. The impact of the debt ratio is greater

than that of the corporate governance measures, and the impact of corporate governance becomes slightly

greater in the one-to-one relationship than in the presence of the debt ratio.
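The fit statistics reported for Figure 1 are related by the usual formula RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))). A sketch, where the sample size N is an assumption (the paper pools 100 firms over 2005-2009 but does not report N for this model), so the value is only indicative:

```python
import math

# Hedged sketch of the RMSEA fit index from chi-square, degrees of
# freedom and sample size. N = 380 is an assumption, not a reported
# figure; it is chosen only to show the statistic near the paper's 0.13.

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(741.47, 101, 380), 2))  # ≈ 0.13 for N around 380
```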

5. Conclusion and Discussion<br />

Corporate governance has become an important research area in Pakistan since the publication of the

SECP Corporate Governance Code 2002 for publicly listed companies. Corporate failures such as Enron,

WorldCom, One-Tel, Ansett and Parmalat have awakened the need to strengthen corporate governance

practices not only in the developed world but also in the developing world. Pakistani corporations have

historically been family-controlled, ownership is very concentrated, and families control firms through

pyramidal and tunneling ownership structures. The relationships among corporate governance variables

have been widely researched, but typically only a few variables have been considered. In this research a

large number of corporate governance structures are examined and their relationships with capital

structure and firm performance are estimated.

In examining the relationship of corporate governance structures with Debt ratio, with Size of Firm and

Age of Firm used as control variables, a negative relationship was present between Size of Firm and

capital structure. The reason could be that as the size of the firm increases, the size of its assets and the

creditworthiness of the organization increase, so the cost of raising debt and the overall cost of debt

decrease; this is particularly true of the textile sector because of the nature of ownership, which is

predominantly closely held within the family or kinship network. A positive relationship was present

between board size and capital structure, as found by Wen et al. (2002) and Abor (2007). The reason could

be that as the number of board members increases, the capability to raise debt increases through the joint

investment portfolio of the directors, and a greater number of directors puts pressure on managers,

through strict monitoring, to use more debt in order to increase the firm's value. A positive relationship

was present between Board Composition and capital structure, as found by Jensen (1986), Berger et al.

(1997) and Abor (2007). The reason could be that as the number of independent directors increases, the

creditworthiness of the organization increases, management decision making improves, and more board

members bring useful links to financing facilities, so more debt is used in comparison to equity. A positive

relationship was present between CEO Duality and capital structure, as found by Abor (2007); the reason

could be that greater control brings active involvement, no conflict between CEO and chairman, better

allocation of resources and a more effective strategy on the use of debt. A positive relationship was present

between Board Skill and capital structure; the reason could be that with more educated directors the board

is more effective, management decision making improves, and knowledgeable and useful links allow more

debt to be raised at lower cost than equity. Negative relationships were present between capital structure

and Audit Committee Composition, Chairman Independent or Non-executive, CFO Attends All Board

Meetings, and Separate Management Consultants. The reason could be that with more independent or

non-executive members, a CFO attending all board meetings and separate management consultants hired,

stricter monitoring and regulatory mechanisms operate, with a negative orientation towards debt relative

to assets and a positive orientation towards stable cash flows, so the board is influenced to grow assets

rather than debt.

In examining the relationship of corporate governance structures with Return on Assets, with Debt ratio,

Size of Firm and Age of Firm used as control variables, a negative relationship was present between Debt

ratio and Return on Assets. The result is consistent with many studies of developing countries; the reason

could be that costs such as interest payments, bankruptcy costs and agency costs exceed the tax benefits of

using more debt, and can also be attributed to the increase in interest rates over the past few years. A

negative relationship was present between CEO Duality and Return on Assets, as found by Sanda et al.

(2003); the reason could be that when the CEO is also chairman, he or she cannot manage both duties

actively at the same time, and the burden of duties harms performance. Negative relationships were

present between Return on Assets and both Audit Committee Composition and CFO Attends All Board

Meetings; the reason could be that more independent directors, or a CFO attending all meetings, influence

the board to invest less in risky projects and prefer more stable ones, resulting in lower returns. A positive

relationship was present between Company Chairman in Audit Committee and Return on Assets; the

reason could be that a company chairman sitting on the audit committee serves as a monitoring

mechanism over the board's decisions, keeps a close eye on the company's financials and deals with

discrepancies in time.

Corporate governance structures taken together significantly and negatively impact capital structure. If

corporate governance becomes strong, the debt ratio will decrease: with more corporate governance

structures in place there is stricter monitoring and regulation, and to protect the interests of shareholders,

managers pursue a policy of lower debt levels in order to mitigate the extra risks associated with higher

debt. Corporate governance measures taken together impact Return on Assets positively, while the debt

ratio impacts Return on Assets negatively. If corporate governance becomes strong, Return on Assets will

increase: strong corporate governance structures bring strict monitoring and regulation, effective

accounting standards and better control systems, which result in better utilization of the firm's resources

and improved Return on Assets.

Overall, there is a relationship between corporate governance structures and capital structure, and between

corporate governance structures and firm performance. In some cases it is positive and in others negative;

in some cases it is highly significant and in others only weakly significant. Taken together, the corporate

governance structures impact capital structure negatively and Return on Assets positively.

This study has outlined the internal corporate governance structures and their relationships with capital

structure and firm performance. It adds to the literature and opens new avenues for future studies, in

which both internal and external corporate governance structures can be examined and their relationships

estimated with different proxies of capital structure and firm performance. Because of time constraints and

the large number of firms, only one sector was covered; future studies can incorporate different sectors and

explore how corporate governance structures and their relationships with capital structure and firm

performance differ across them.



6. References

Abor, J. 2007. Corporate governance and financing decisions of Ghanaian listed firms. Corporate<br />

Governance, 7: 83-92.<br />

Agrawal, A. & Knoeber, C. R. 1996. Firm performance and mechanisms to control agency problems<br />

between managers and shareholders. Journal of Financial and Quantitative Analysis, 31:377-397.<br />

Bauer, R., Gunster, N. & Otten, R. 2004. Empirical evidence on corporate governance in Europe: The<br />

effect on stock returns, firm value and performance. The Journal of Asset Management, 5:91-104.<br />

Baysinger, B. D. & Butler, H. N. 1985. Corporate governance and the board of directors: Performance

effects of changes in board composition. Journal of Law, Economics and Organization, 1:101-124.

Berger, P. G., Ofek, E. & Yermack, D. L. 1997. Managerial Entrenchment and Capital Structure Decisions.<br />

Journal of Finance, 52:1411-1438.<br />

Bhagat, S. & Black, B. 1999. The uncertain relationship between board composition and firm performance.<br />

The Business Lawyer, 3:921-963.<br />

Black, B. S., Hasung, J. & Woochan, K. 2006. Does Corporate Governance Predict Firms’ Market Values?<br />

Evidence from Korea. Journal of Law, Economics and Organization, 22:366-413.<br />

Brickley, J. A., Coles, J. L. & Jarrell, G. 1997. Leadership Structure: Separating the CEO and Chairman of<br />

the Board. Journal of Corporate Finance, 3:189-220.<br />

Brown, L. D. & Caylor, M. L. 2006. Corporate Governance and Firm Valuation. Journal of Accounting and<br />

Public Policy, 25:409-434.<br />

Chan, K. C. & Li, J. 2008. Audit Committee and Firm Value: Evidence on Outside Top Executives as<br />

Expert-Independent Directors. Corporate Governance, 16:16-31.<br />

Cheema, A., Bari, F. & Saddique, O. 2003. Corporate Governance in Pakistan: Ownership, Control and the<br />

Law. Lahore University of Management Sciences, Lahore.<br />

Chen, Y. 2008. Corporate governance and cash holdings: listed new economy versus old economy firms.<br />

Corporate Governance, 16:430-442.<br />

Chuanrommanee, W. & Swierczek, F. W. 2007. Corporate Governance in ASEAN Financial Corporations:<br />

reality or illusion? Corporate Governance, 15:272-283.<br />

Core, J. E., Holthausen, R. W. & Larcker, D. F. 1999. Corporate governance, chief executive officer<br />

compensation, and firm performance. Journal of Financial Economics, 51:371-406.<br />

Claessens, S. & Fan, J. P. H. 2002. Corporate Governance in Asia: A Survey. International Review of<br />

Finance, 3:71-103.<br />

Daily, C. M. & Dalton, D. R. 1992. The Relationship Between Governance Structure and Corporate<br />

Performance in Entrepreneurial Firms. Journal of Business Venturing, 7:375-386.<br />

Dalton, C. M. & Dalton, D. R. 2005. Boards of directors: Utilizing empirical evidence in developing<br />

practical prescriptions. British Journal of Management, 16:91-97.<br />

De Andres, P., Azofra, V. & Lopez, F. 2005. Corporate boards in OECD countries: Size, composition,<br />

functioning and effectiveness. Corporate Governance, 13:197-210.<br />

Dimelis, S. & Louri, H. 2002. Foreign Ownership and Production Efficiency: a Quantile Regression<br />

Analysis. Oxford Economic Papers, 54:449-469.<br />

Drobetz, W., Schillhofer, A. & Zimmermann, H. 2004. Corporate governance and expected stock returns:<br />

evidence from Germany. European Financial Management, 10:267-293.<br />

Elsayed, K. 2007. Does CEO duality really affect corporate performance? Corporate Governance, 15:1203-<br />

1214.<br />

Erhardt, N. L., Werbel, J. & Shrader, C. B. 2003. Board of director diversity and firm financial<br />

performance. Corporate Governance, 11: 102-111.<br />

Fama, E. F. & Jensen, M. 1983. Separation of Ownership and Control. Journal of Law and Economics,<br />

26:301-325.<br />



Fosberg, R. 1989. Outside directors and managerial monitoring. Akron Business and Economic Review,<br />

20:24-32.<br />

Fosberg, R. H. 2004. Agency problems and debt financing: leadership structure effects. Corporate<br />

Governance: International Journal of Business in Society, 4:31-38.<br />

Gedajlovic, E., Yoshikawa, T. & Hashimoto, M. 2005. Ownership structure, investment behavior and firm<br />

performance in Japanese manufacturing industries. Organization Studies, 26:7-35.<br />

Ghani, W. I. & Ashraf, J. 2005. Corporate Governance, Business Group Affiliation and Firm Performance:<br />

Descriptive Evidence from Pakistan. CMER Working Paper No. 05-35.<br />

Hasan, A. & Butt, S. A. 2009. Impact of Ownership Structure and Corporate Governance on Capital<br />

Structure of Pakistani Listed Companies. International Journal of Business and Management, 4:50-57.<br />

Harford, J., Mansi, S. A. & Maxwell, W. F. 2008. Corporate governance and cash holdings. Journal of<br />

Financial Economics, 87:535-555.<br />

Hermalin, B. E. & Weisbach, M. S. 1991. The effects of board composition and direct incentives on firm<br />

performance. Financial Management, 20:101-112.<br />

Hermalin, B. E. & Weisbach, M. S. 2003. Boards of directors as an endogenously determined institution: A<br />

survey of the economic literature. Economic Policy Review-Federal Reserve Bank of New York, 9:7-<br />

26.<br />

Holderness, C., Kroszner, R. & Sheehan, D. 1999. Were the Good Old Days That Good? The Evolution of<br />

Managerial Stock Ownership Since the Great Depression. Journal of Finance, 54:435-469.<br />

ICAP MIES-5. Available at www.icap.org.pk/mies/mies5.pdf (Accessed on 10/08/2010)<br />

Jackling, B. & Johl, S. 2009. Board Structure and Firm Performance: Evidence from India’s Top<br />

Companies. Corporate Governance: An International Review, 17:492-509.<br />

Javid, A. Y. & Iqbal, R. 2010. Corporate Governance in Pakistan: Corporate Valuation, Ownership and<br />

Financing. PIDE Working Papers 2010:57.<br />

Jensen, M. C. 1986. Agency costs of free cash flow, corporate finance and takeovers. American Economic<br />

Review, 76:323-329.<br />

Jensen, M. C. 1993. The Modern Industrial Revolution, Exit, and the Failure of Internal Control Systems.<br />

Journal of Finance, 48:831-880.<br />

Kang, E. & Zardkoohi, A. 2005. Board leadership structure and firm performance. Corporate Governance,<br />

13:785-799.<br />

Kosnik, R. D. 1990. Effects of board demography and directors’ incentives on corporate greenmail<br />

decisions. Academy of Management Journal, 33:129-150.<br />

Lawrie, A. 1998. Small Firms Survey: Skills. British Chambers of Commerce, London.<br />

Li, M. & Simerly, R. Y. 1998. The moderating effect of environmental dynamism on the ownership and<br />

performance relationship. Strategic Management Journal, 19:169-179.<br />

Lipton, M. & Lorsch, J. 1992. A Modest Proposal for Improved Corporate Governance. The Business<br />

Lawyer, 48:59-77.<br />

Lybaert, N. 1998. The information use in a SME: its importance and some elements of influence. Small<br />

Business Economics, 10:171-191.<br />

Mintzberg, H. 1983. Power in and around organizations. Englewood Cliffs: Prentice Hall.<br />

Mir, S. & Nishat, M. 2004. Corporate Governance Structure and Firm Performance in Pakistan: An<br />

Empirical Study. Paper presented at Second Annual Conference in Corporate Governance. Lahore<br />

University of Management Sciences, Lahore.<br />

Muth, M. M. & Donaldson, L. 1998. Stewardship theory and board structure: A contingency approach.<br />

Corporate Governance, 6:5-27.<br />

Peel, M. & O’Donnell, E. 1995. Board structure, corporate performance, and auditor independence.<br />

Corporate Governance, 3:207-217.<br />



Pfeffer, J. & Salancik, G. R. 1978. The external control of organizations: a resource dependence<br />

perspective. New York: Harper and Row.<br />

Powell, W. W. 1991. Expanding the scope of institutional analysis. The new institutionalism in<br />

organizational analysis. Chicago: University of Chicago Press.<br />

Rais, R. B. & Saeed, A. 2005. Regulatory Impact Assessment of SECP’s Corporate Governance Code in<br />

Pakistan. Lahore University of Management Sciences, Lahore, CMER Working Paper 06-39.<br />

Rajagopalan, N. & Zhang, Y. 2008. Corporate governance reforms in China and India: Challenges and<br />

opportunities. Business Horizons, 51:55-64.<br />

Rechner, P. L. & Dalton, D. R. 1991. CEO duality and organizational performance: a longitudinal analysis.<br />

Strategic Management Journal, 12:155-160.<br />

Renneboog, L. 2000. Ownership, managerial control and the corporate governance of companies listed on<br />

the Brussels Stock Exchange, Journal of Banking and Finance, 24:1959-1995.<br />

Roberts, J., McNulty, T. & Stiles, P. 2005. Beyond agency conceptions of the work of the nonexecutive<br />

director: Creating accountability in the boardroom. British Journal of Management, 16:5-26.<br />

Rosenstein, S. & Wyatt, J. 1990. Outside directors, board effectiveness and shareholders’ wealth. Journal of<br />

Financial Economics, 26:175-191.<br />

Sanda, A. U., Mukaila, A. S. & Garba, T. 2003. Corporate Governance Mechanisms and Firm Financial<br />

Performance in Nigeria. Final Report Presented to the Biannual Research Workshop of the AERC,<br />

Nairobi, Kenya, 24-29.<br />

Shaheen, R. & Nishat, M. 2005. Corporate Governance and Firm Performance: An Exploratory Analysis.<br />

Paper presented in the Conference of Lahore School of Management Sciences, Lahore.<br />

Short, H. & Keasey, K. 1999. Managerial ownership and the performance of firms: evidence from the UK.<br />

Journal of Corporate Finance, 5:79-101.<br />

Shrader, B., Blackburn, V. B. & Iles, P. 1997. Women in management and firm financial value: an<br />

exploratory study. Journal of Managerial Issues, 9:355-372.<br />

Uzun, H., Szewczyk, S. H. & Varma, R. 2004. Board composition and corporate fraud. Financial Analysts<br />

Journal, 60:33-43.<br />

Van den Berghe, L. A. A. & Levrau, A. 2004. Evaluating boards of directors: What constitutes a good<br />

corporate board? Corporate Governance, 12:461-478.<br />

Wen, Y., Rwegasira, K. & Bilderbeek, J. 2002. Corporate Governance and Capital Structure Decisions of<br />

Chinese Listed Firms. Corporate Governance, 10:75-83.<br />

Yermack, D. 1996. Higher market valuation of companies with a small board of directors. Journal of<br />

Financial Economics, 40:185-221.<br />



CORPORATE GOVERNANCE AND COMPLIANCE WITH IFRSs - MENA EVIDENCE<br />

Marwa Hassaan & Omneya Abdelsalam<br />

Aston Business School, Birmingham, UK<br />

Email: hassanm3@aston.ac.uk<br />

Abstract This paper aims to examine the influence of corporate governance, as a newly introduced concept in the MENA region, on<br />

the levels of compliance with IFRSs disclosure requirements by companies listed on two leading stock exchanges in the region: CASE<br />

and ASE. This study employs a cross-sectional analysis of all non-financial companies listed on both stock exchanges for the fiscal<br />

year ending December 2007. Using a disclosure index derived from the mandatory IFRSs disclosure requirements for the fiscal year<br />

beginning January 2007, this study measures the levels of compliance by companies listed on the two stock exchanges. An innovative<br />

theoretical foundation is deployed, in which compliance is interpretable through three lenses; the institutional isomorphism theory,<br />

secrecy versus transparency as one of Gray's accounting sub-cultural values and economic-based theories. This study extends the<br />

financial reporting literature, cross-national comparative financial disclosure literature and the emerging markets disclosure literature<br />

by carrying out one of the first comprehensive comparative studies in the MENA region. Results provide evidence of de jure but not<br />

de facto compliance with IFRSs disclosure requirements in the scrutinised MENA countries. In broad terms, the influence of corporate<br />

governance best practices on the levels of compliance with IFRSs is limited, as corporate governance is not yet part of the cultural values of MENA<br />

societies. The results of the multivariate analysis show that levels of compliance with the IFRSs disclosure requirements can best be<br />

explained by ownership structure: companies with a higher proportion of public ownership comply less with IFRSs disclosure<br />

requirements. These findings are consistent with the notions of the proposed theoretical foundation.<br />

Keywords IFRSs, MENA region, Egypt, Jordan, CASE, ASE, Role Duality, Board Composition, Ownership Structure, Isomorphism<br />

and Gray’s Accounting Sub-culture Model<br />

1 Introduction<br />

In a global economy, the financial reporting practices of companies around the world are a key issue. Globalization<br />

of the capital markets has increased the need for high-quality, comparable financial information; consequently,<br />

pressure has been increasing for the adoption of a single set of accounting standards worldwide (Joshi et al., 2008).<br />

The use of the IFRSs is expected to improve the comparability of financial statements, enhance corporate<br />

transparency, and hence increase the quality of financial reporting worldwide (Daske et al., 2008). Furthermore, for<br />

the Middle East and North Africa (MENA) capital markets as emerging economies, compliance with the IFRSs may be<br />

essential in order to foster their economic transition (CIPE, 2003) as it would attract more direct foreign investments.<br />

This study is motivated by a belief that achieving de facto compliance with the IFRSs by the MENA region listed<br />

companies is not an easy task. It is an on-going process which requires strong support from researchers, capital<br />

market authorities, accounting regulators, business firms, accounting practitioners and other stakeholders.<br />

The MENA region capital markets that are under scrutiny in this study (Egypt and Jordan) mandated the adoption of<br />

the IFRSs by companies listed on their stock exchanges in 1997 and 1998 respectively. Consequently, they can be<br />

considered early adopters of the IFRSs compared to the European Union countries, which only required companies<br />

listed on their stock exchanges to prepare their financial statements in accordance with the IFRSs in 2005 and<br />

mandated their adoption in 2007. This fact raises the need to investigate the obstacles to full compliance with the<br />

IFRSs, especially after the introduction of the corporate governance concept in the MENA region, which is supposed to<br />

enhance the levels of disclosure and transparency by publicly listed companies.<br />

The review of prior compliance literature reveals that there is a shortage of financial disclosure studies<br />

that investigated emerging capital markets in general and the MENA capital markets in particular. Consequently, this<br />

study is expected to contribute to filling this gap. On the other hand, as financial disclosure lies at the core of all<br />

corporate governance statutes and codes, investigating the association between corporate governance as a newly<br />

introduced concept in the region and the levels of compliance with the IFRSs disclosure requirements is expected to<br />

enrich financial disclosure as well as corporate governance literature. Moreover, this study to the best of the<br />

researchers' knowledge is the first to use the institutional isomorphism theory in providing a theoretical foundation for<br />

the impact of corporate governance structures on the levels of compliance with the IFRSs in the MENA region.<br />

This paper addresses five research questions:<br />

1. What is the extent of compliance with the IFRSs disclosure requirements by companies listed on the two<br />

selected MENA region stock exchanges?<br />

2. How could differences in levels of compliance with the IFRSs be explained by board of directors (BOD)<br />

independence?<br />



3. How could differences in levels of compliance with the IFRSs be explained by role duality?<br />

4. How could differences in levels of compliance with the IFRSs be explained by ownership structure?<br />

5. To what extent do institutional isomorphism theory, cultural theories and economic-based theories help to<br />

explain the levels of compliance with the IFRSs disclosure requirements within the MENA context?<br />

In order to answer the research questions, the remaining part of the paper is organized as follows: A literature review is<br />

provided in Section 2. Section 3 develops and formulates research hypotheses. Section 4 describes sample selection,<br />

data collection, and research methods. Results and analysis are presented in Section 5. Finally, Section 6 concludes.<br />

2 Literature Review and Formulation of Research Hypotheses<br />

Financial disclosure is a rich field of empirical enquiry (Healy & Palepu, 2001). More recently, researchers have become<br />

more concerned with investigating the issue of adopting a unified set of accounting standards worldwide. This line<br />

of research is concerned with investigating the applicability of full compliance with the IFRSs and the association<br />

between levels of compliance with the IFRSs and disclosure environment attributes. The importance of evaluating<br />

the levels of compliance with mandatory disclosure requirements is demonstrated by the findings of this line of research,<br />

which report low levels of compliance with mandatory disclosure requirements, particularly in<br />

developing countries (e.g., Abdelsalam & Weetman, 2003; Glaum & Street, 2003; Owusu-Ansah & Yeoh, 2005;<br />

Samaha, 2006; Dahawy & Conover, 2007; Al-Shammari et al., 2008).<br />
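The disclosure indices used in this line of research are typically unweighted (Cooke-style) scores: the ratio of items actually disclosed to items judged applicable to the company. A minimal sketch, assuming binary scoring and the exclusion of non-applicable items; the item labels are hypothetical:<br />

```python
def compliance_index(items):
    """Unweighted (Cooke-style) compliance score for one company.

    items maps each mandatory disclosure item to one of 'disclosed',
    'not_disclosed', or 'not_applicable' (judged from the annual report).
    Non-applicable items are excluded so a company is not penalised for
    requirements that do not apply to it.
    """
    applicable = [s for s in items.values() if s != "not_applicable"]
    if not applicable:
        return None  # nothing applicable: the company cannot be scored
    return applicable.count("disclosed") / len(applicable)
```

Scores computed this way for each sampled company then serve as the dependent variable in the compliance analysis.<br />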

For emerging capital markets, good corporate governance practices may be essential for the success of their reform<br />

programmes and for preserving a healthy investment environment. Following a series of events over the<br />

last two decades, headed by the Asian financial crisis and the spread of high-profile corporate scandals such as<br />

WorldCom and Enron, empirical research into accounting disclosure practices began to consider the impact of<br />

corporate governance structures on disclosure practices. The development of corporate governance is a global<br />

occurrence and is thus influenced by legal, cultural, ownership and other structural differences (Mallin, 2009: 13). To date,<br />

corporate governance does not have a widely accepted paradigm or theoretical foundation (Tricker, 2009: 233).<br />

Transparency, fairness and accountability are the core values of corporate governance. Stemming from the desire to<br />

enhance access to more capital that is necessary to achieve economic development and globalise their economies,<br />

corporate governance practices have been brought into the spotlight in developing countries. In this regard, many<br />

researchers highlight the influence of corporate board composition and ownership structure (e.g., Eng & Mak, 2003,<br />

Ghazali & Weetman, 2006; Ezat & El-Masry, 2008) on disclosure practices of companies listed on emerging stock<br />

exchanges. However, we suggest that as corporate governance culture was initiated in developed countries and as it is<br />

newly introduced in developing countries, its contribution to enhancing capital markets' performance is subject to the<br />

extent to which the requirements for good corporate governance practices are consistent with the existing values, past<br />

experiences and the needs of all parties involved in the financial reporting process. Otherwise, it is expected to take<br />

some time until the impact of corporate governance culture can be measured. This is because it requires developing an<br />

understanding, forming favourable attitudes and beliefs, and acquiring the skills required to apply corporate<br />

governance best practices.<br />

3 Development of Hypotheses<br />

We have chosen variables to represent country (the stock exchange on which the company is listed), particular aspects of ownership,<br />

and board structure (role duality and board composition). In each case we state an expectation based on prior literature.<br />

3.1 Country<br />

Although Egypt and Jordan are not homogeneous in terms of their capital markets' capacities to practice and enforce<br />

compliance with the IFRSs and corporate governance principles (CIPE, 2003) mainly due to the shortage in<br />

qualified accountants in the Jordanian context, the similarities in their legal, economic and cultural contexts are<br />

expected to reduce the differences in the levels of compliance with the IFRSs between both countries. In addition,<br />

cultural similarities and the novelty of corporate governance concept in both jurisdictions are expected to reduce<br />

differences in the dominant corporate governance structures between Egypt and Jordan. Accordingly, the first<br />

research hypothesis can be stated as follows:<br />

H1: There are no statistically significant differences between the Egyptian and the Jordanian contexts.<br />



This hypothesis can be further divided into the following two hypotheses.<br />

H1a: There are no statistically significant differences between Egypt and Jordan in the levels of compliance with the<br />

IFRSs disclosure requirements.<br />

H1b: There are no statistically significant differences between Egypt and Jordan in the dominant corporate governance structures.<br />

3.2 Role Duality<br />

Role duality is a governance issue that concerns whether the chief executive officer (CEO) is also the chair of the board of directors.<br />

Separating the two positions has the potential to improve the monitoring function of the board and to reduce the<br />

advantages gained by withholding information, hence to improve the quality of reporting (Arcay & Vazquez, 2005).<br />

Combining both roles reduces the availability of independent evaluation of the CEO's performance as the CEOs<br />

themselves will select which information is provided to other directors (Jensen, 1993). In addition, a dual role<br />

creates a strong individual power base that could impair board independence; thus the effectiveness of its governing<br />

function may be compromised (Abdelsalam & Elmasry, 2008).<br />

On the other hand, some researchers argue that the separation of the two positions is not essential for better<br />

performance (Dahya et al., 1996; Gul & Leung, 2004). Furthermore, Rechner & Dalton (1991) suggest that role<br />

duality leads to a clear unfettered leadership of boards and companies.<br />

With respect to MENA region listed companies, including those in Egypt and Jordan, role duality is common, as the majority<br />

of companies are family-owned, which makes it difficult to induce an owner who has invested money in a company<br />

to step aside and allow others to manage that money (CIPE, 2003; IFC & Hawkamah, 2008).<br />

The results of prior research that investigated the association between role duality and levels of financial disclosure are<br />

mixed. Some studies show that role duality is significantly associated with a lower level of financial disclosure (e.g.,<br />

Haniffa & Cooke, 2002; Gul & Leung, 2004; Arcay & Vazquez, 2005; Abdelsalam & El-Masry, 2008). Other<br />

empirical research shows no association between role duality and financial disclosure or reporting quality (Cheng &<br />

Courtenay, 2006; Ghazali & Weetman, 2006), and one study (Felo, 2009) reports a positive relationship between role<br />

duality and financial disclosure practices. The contradictory nature of these results raises the need to re-examine this<br />

relationship and makes it difficult to predict the type of relationship between role duality and levels of compliance<br />

with the IFRSs in the scrutinised MENA capital markets. Accordingly, the second research hypothesis can be stated as follows:<br />

H2: There are no statistically significant differences in the levels of compliance with the IFRSs disclosure<br />

requirements between companies that separate the positions of the CEO and the Chair and those that do not.<br />

3.3 Board independence<br />

Board independence is an issue that concerns the number of independent non-executive directors (outsiders)<br />

compared to the number of executive directors (insiders) on firm boards.<br />

Typically a board with more independent directors is expected to be more effective in monitoring management and<br />

hence to lead to improved financial disclosure (Haniffa & Cooke, 2002; Dey, 2008). It is expected that insiders cannot<br />

effectively monitor themselves on behalf of shareholders (Muslu, 2005).<br />

Dominance of non-executive directors on the board will maximize their ability to compel management to meet the<br />

disclosure requirements of different stakeholder groups (Haniffa & Cooke, 2002; Ghazali & Weetman, 2006).<br />

Non-executive directors, in addition to their prestige, expertise and contacts, are respected for their wisdom and<br />

independence (Haniffa & Cooke, 2002).<br />

Opponents argue that increasing the number of non-executive directors on the board results in excessive monitoring<br />

(Baysinger & Butler, 1985) or a lack of genuine independence (Demb & Neubauer, 1992). Insiders can also provide<br />

operational efficiency (Muslu, 2005).<br />

Concerning the attitude toward board independence in the MENA region, the presence of a number of non-executive<br />

directors on the board is recognized in companies listed on the scrutinised MENA stock exchanges. However, the issue is<br />

that in most cases non-executive board members lack independence or they may lack experience (CIPE, 2003: 37).<br />

Findings of prior research that examined the association between board independence and financial disclosure are mixed,<br />

which makes it difficult to predict the relationship between board independence and levels of compliance with the IFRSs<br />

in the scrutinised MENA capital markets and thus raises the need to revisit this issue. Some researchers report a positive<br />

relationship (e.g., Arcay & Vazquez, 2005; Cheng & Courtenay, 2006; Abdelsalam & Street, 2007; Abdelsalam &<br />

Elmasry, 2008; Ezat & Elmasry, 2008; Felo, 2009). In contrast, some researchers report a negative relationship (Eng &<br />

Mak, 2003; Gul & Leung, 2004; Muslu, 2005) while other researchers did not find any relationship (Haniffa & Cooke,<br />

2002; Ghazali & Weetman, 2006). Accordingly, the third research hypothesis can be stated as follows:<br />



H3: There is no relationship between BOD independence and the extent of compliance with the IFRSs disclosure requirements.<br />

3.4 Ownership Structure<br />

Ownership structure is defined by Denis & McConnell (2003: 3) as ‘The identities of a firm's equity holders and the<br />

sizes of their positions’.<br />

The ownership structure of a firm may be a determinant of its disclosure practices (Eng & Mak, 2003; Arcay &<br />

Vazquez, 2005). High levels of concentration of capital may be accompanied by the owner’s considerable involvement in<br />

the firm’s management, which in turn may lead to unrestricted access to information and thus may limit the demand for, and hence the<br />

supply of, company information, and vice versa (Haniffa & Cooke, 2002; Arcay & Vazquez, 2005; Ezat & El-Masry, 2008).<br />

On the other hand, when share ownership is widely held, the potential for conflicts of interests between the principal<br />

and the agent is greater than in closely held companies. As a result, disclosure is likely to be greater in widely held<br />

companies to enable the principal to effectively monitor whether his/her economic interests are optimized and whether<br />

the agent acts in the best interests of the principal as an owner of the firm (Fama & Jensen, 1983; Chen & Gray, 2002).<br />

The review of the dominant ownership structures in the focus MENA stock exchanges reveals that shares of most<br />

companies are family owned or government owned (CIPE, 2003; Al-Htaybat, 2005; Naser et al., 2006; Tricker,<br />

2009). The dominance of family ownership is expected to have a negative impact on the level of financial disclosure<br />

for two reasons: firstly, family shareholders have direct access to company information (Naser et al., 2006). Secondly, the secretive<br />

nature of these societies makes family shareholders encourage management to keep disclosure at minimum levels as<br />

long as the costs of compliance exceed the costs of non-compliance, regardless of the impact on the interests of minority<br />

shareholders. With respect to the impact of dominant government ownership there are two distinct points of view.<br />

The first argues that in the case of dominant government ownership, disclosure levels will be low, as the government can<br />

directly request any information it needs from company management (Naser et al., 2006). The other point of view<br />

sees dominant government ownership as advantageous: based on agency theory, it will improve disclosure practices,<br />

as the government will encourage management to adopt competent disclosure policies (Suwaidan, 1997); thus, to reduce<br />

monitoring costs, management will choose to comply with the IFRSs.<br />

Based on the review of the patterns of ownership structure in the listed MENA companies under scrutiny in this study<br />

and the availability of ownership structure related data for these companies, this study examines the influence of<br />

ownership structure on the levels of compliance with the IFRSs in the scrutinised MENA capital markets using four<br />

distinct measures: government ownership (defined as the percentage of company shares owned by the government),<br />

management ownership (defined as the percentage of company shares owned by company management and other<br />

board members), private ownership (defined as the percentage of company shares owned by private shareholders) and<br />

public ownership (defined as the percentage of company shares owned by the free float that is less than 5%).<br />
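As an illustration of how the four measures follow from the definitions above, a minimal sketch, assuming shareholdings are available as (owner type, shares) pairs and that any stake below the 5% free-float threshold counts as public ownership; this data layout is an assumption, not the paper's described procedure:<br />

```python
def ownership_measures(holdings, total_shares):
    """holdings: iterable of (owner_type, shares) pairs, where owner_type
    is 'government', 'management', or 'private'. Per the definitions in
    Section 3.4, any stake below 5% of total_shares is counted as public
    free float regardless of the holder's type. Returns each measure as
    a proportion of total shares."""
    measures = {"government": 0.0, "management": 0.0,
                "private": 0.0, "public": 0.0}
    for owner_type, shares in holdings:
        stake = shares / total_shares
        if stake < 0.05:          # free float: below the 5% threshold
            measures["public"] += stake
        else:
            measures[owner_type] += stake
    return measures
```

Each resulting proportion then enters the multivariate analysis as a separate explanatory variable.<br />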

3.4.1 Government ownership<br />

The results of prior studies that examined the association between government ownership and levels of financial<br />

disclosure are mixed. For instance, Eng & Mak (2003) report a positive relationship, while Naser et al. (2006) report<br />

a negative relationship and Ghazali & Weetman (2006) find a negative but insignificant relationship.<br />

Accordingly, building on the above discussion, research hypothesis (4a) can be stated as follows:<br />

H4a: There is no relationship between government ownership ratio and the extent of compliance with the IFRSs<br />

disclosure requirements.<br />

3.4.2 Management Ownership<br />

The results of the majority of prior studies that examined the association between management ownership and levels<br />

of financial disclosure show a negative association (e.g., Eng & Mak, 2003; Arcay & Vazquez, 2005; Ghazali &<br />

Weetman, 2006; Abdelsalam & El-Masry, 2008). Thus, the effect of managerial ownership on levels of compliance<br />

with the IFRSs is expected to be substitutive. Accordingly, research hypothesis (4b) can be stated as follows:<br />

H4b: There is a negative relationship between management ownership ratio and the levels of compliance with the IFRSs.<br />

3.4.3 Private Ownership<br />

Similar to government ownership, there is no consensus among prior researchers regarding the influence of private<br />

ownership on the levels of compliance with financial disclosure requirements. Some researchers report that private<br />

ownership and disclosure may be complementary (Diamond & Verrecchia, 1991, as cited in Haniffa & Cooke, 2002). On the<br />

contrary, some researchers report that concentrated ownership and financial disclosure are substitutes (dominance of<br />



private shareholders reduces levels of financial disclosure), such as Naser et al. (2006), while others report no<br />

association between the dominance of private ownership and levels of financial disclosure (Suwaidan, 1997; Depoers,<br />

2000; Omar, 2007). Accordingly, research hypothesis (4c) can be stated as follows:<br />

H4c: There is no relationship between private ownership ratio and the extent of compliance with IFRSs disclosure requirements.<br />

3.4.4 Public Ownership<br />

The results of most prior studies show a positive association between public ownership and levels of financial<br />

disclosure (e.g., Haniffa & Cooke, 2002; Al-Htaybat, 2005; Arcay & Vazquez, 2005). Accordingly, research<br />

hypothesis (4d) can be stated as follows:<br />

H4d: There is a positive relationship between public ownership ratio and the extent of compliance with the IFRSs<br />

disclosure requirements.<br />

3.5 Control Variables<br />

The review of prior financial disclosure studies led to the decision to incorporate BOD size and three firm-specific<br />

characteristics in the multivariate analysis as control variables: company size (total assets), type of business activity<br />

(non-manufacturing versus manufacturing) and type of audit firm (Big 4 versus non-Big 4).<br />

Prior research results are mixed regarding the direction of the association between BOD size and disclosure<br />

practices. Hence, we include BOD size as a control variable but do not predict the direction of the association.<br />

Based on the findings of prior researchers (Eng & Mak, 2003; Akhtaruddin, 2005; Aksu & Kosedag, 2006 and Naser<br />

et al., 2006) we include firm size as a control variable and we anticipate a positive relationship between firm size<br />

and levels of compliance with the IFRSs disclosure requirements.<br />

Based on the evidence provided by prior research that the type of business activity influences a company's disclosure practices (e.g.,<br />

Cooke, 1992; Haniffa & Cooke, 2002), type of business activity will be employed in this study as a control variable.<br />

Based on the evidence provided by prior research that there is a relationship between the type of auditor and levels<br />

of company disclosure (e.g., Patton & Zelenka, 1997; Glaum & Street 2003), type of auditor will be employed in<br />

this study as a control variable and it is expected that auditing by a Big 4 audit firm has a positive impact on the<br />

levels of compliance with the IFRSs.<br />

4 Methodology<br />

4.1 Sample Selection<br />

This study applies to the annual reports for the fiscal year ending December 2007 for the entire population of non-financial<br />

companies listed on the scrutinised MENA region stock exchanges, a total of 311 companies (145 companies<br />

from Egypt and 166 from Jordan). After excluding companies for which no corporate governance information for<br />

2007 was available or those for which the complete 2007 annual report was not available in either the Zawya database<br />

or EGID (Egypt for Information Dissemination Company), the final sample contained 102 companies (75 from<br />

Egypt and 27 from Jordan).<br />

4.2 Disclosure Checklist<br />

To meet the purpose of this study the researchers use a self-constructed disclosure checklist based on the IFRSs<br />

issued by the IASB and required to be followed in preparing the financial statements for the fiscal year beginning January<br />

2007. The disclosure index employed in this study includes 275 IFRSs based items 1 . Thus it can be considered<br />

amongst the most comprehensive mandatory disclosure indices applied in the MENA region. The disclosure index<br />

for each company was calculated as the ratio of the total actual score awarded to the maximum possible score of<br />

relevant items applicable for that company.<br />

The calculation of the disclosure index (dependent variable) for each company under this approach is as follows:<br />

DI = ADS/ MD<br />

Where:<br />

DI refers to the disclosure index (0≤ DI≤1)<br />

1 The disclosure index employed in this study is not presented here to save space but is available from the authors upon request.<br />



ADS refers to the aggregate disclosure score for a particular company<br />

MD refers to the maximum score possible for that company (≤ 275 items)<br />
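The ratio above can be sketched in code. This is an illustration only; the item scores and the helper name `disclosure_index` are hypothetical, not part of the study:

```python
def disclosure_index(item_scores):
    """DI = ADS / MD for one company.

    item_scores holds 1 (item disclosed), 0 (required but not
    disclosed), or None (item not applicable to this company).
    """
    applicable = [s for s in item_scores if s is not None]
    ads = sum(applicable)   # aggregate disclosure score (ADS)
    md = len(applicable)    # maximum possible score (MD <= 275)
    return ads / md

# Hypothetical company: 3 of 4 applicable items disclosed, 1 item N/A
print(disclosure_index([1, 1, 0, 1, None]))  # 0.75
```

Items scored as not applicable reduce MD rather than penalising the company, which is what keeps 0 ≤ DI ≤ 1.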

4.3 Data Collection and Regression Model<br />

The data on the chosen independent variables (see below) were obtained from the Zawya database and from EGID. The<br />

following multiple regression model is proposed:<br />

Yj= β0+ β1 countryj + β2 role dualityj + β3 board independencej + β4 government ownership ratioj + β5<br />

management ownership ratioj + β6 private ownership ratioj + β7 public ownership ratioj + β8 board sizej + β9 total<br />

assetsj + β10 type of business activityj + β11 type of audit firmj + Ɛj<br />

Where:<br />

Yj= Disclosure index for companies (j=1,…, 102) which denotes the dependent variable;<br />

β0= The intercept;<br />

Ɛj= Error term<br />

The regression model incorporates two categorical test variables, country (1 if Jordan, 0 if Egypt) and role duality (1 if<br />

the chairman is not the CEO, 0 if the chairman is the CEO). It also incorporates two categorical control variables, type<br />

of business activity (1 if non-manufacturing, 0 if manufacturing) and type of audit firm (1 if a Big 4 firm, 0 if not).<br />
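The model and its dummy coding can be sketched with ordinary least squares on synthetic data. Everything below is illustrative: the data are randomly generated stand-ins for the Zawya/EGID variables, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 102  # number of sampled companies

# Synthetic stand-ins for the 11 regressors, in the order of the model
X = np.column_stack([
    rng.integers(0, 2, n),    # country (1 = Jordan, 0 = Egypt)
    rng.integers(0, 2, n),    # role duality (1 = chairman is not CEO)
    rng.random(n),            # board independence
    rng.random(n),            # government ownership ratio
    rng.random(n),            # management ownership ratio
    rng.random(n),            # private ownership ratio
    rng.random(n),            # public ownership ratio
    rng.integers(3, 15, n),   # board size
    rng.random(n),            # total assets (normal scores in the study)
    rng.integers(0, 2, n),    # business activity (1 = non-manufacturing)
    rng.integers(0, 2, n),    # audit firm (1 = Big 4)
])
y = rng.random(n)             # disclosure index DI, 0 <= DI <= 1

# OLS: prepend the intercept column and solve for beta_0 ... beta_11
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(beta.shape)  # one intercept plus 11 slope coefficients
```

A statistics package would additionally report standard errors, t-values and collinearity diagnostics; the least-squares solve is only the core of the estimation.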

5 Results and Analysis<br />

5.1 Descriptive Statistics<br />

The results of the analysis of the 102 annual reports of companies listed on the two scrutinised stock exchanges,<br />

CASE (75 companies) and ASE (27 companies) as demonstrated in table 1 show that the average level of<br />

compliance with the IFRSs disclosure requirements is 80% in Egypt and 78% in Jordan. This implies that there is a<br />

relative similarity between the levels of compliance with the IFRSs disclosure requirements between the two stock<br />

exchanges. However, it is also recognised that the minimum level of compliance with the IFRSs disclosure<br />

requirements by companies listed on CASE is higher than that of companies listed on ASE (68% and 59%<br />

respectively). This difference may be attributed to the shortage of qualified accountants in Jordan.<br />

Table 1: Comparison between Egypt and Jordan in terms of the levels of compliance with the IFRSs disclosure requirements (total score)<br />

Country | Minimum % | Maximum % | Mean % | Applicability | Number of companies got above 50% | Percentage of companies got above 50% | Rank<br />

Egypt | 68 | 91 | 80 | 75 | 75 | 100% | NA<br />

Jordan | 59 | 87 | 78 | 27 | 27 | 100% | NA<br />

5.2 Testing Differences between the Egyptian and the Jordanian Contexts (H1)<br />

In order to investigate whether there are significant statistical differences between the Egyptian and the Jordanian contexts (the first research hypothesis), the researcher employed the Mann-Whitney U test 2 .<br />

The Mann-Whitney U test has been used to examine whether there are significant statistical differences in companies' disclosure indices (the dependent variable) and in the independent variables (board independence, role duality, ownership structure, BOD size, firm size, business activity and audit firm) between companies listed on CASE and those listed on ASE. Accordingly, the first research hypothesis has been further divided into the following two sub-hypotheses.<br />

5.2.1 Significant Differences between Egypt and Jordan in terms of the Levels of Compliance with the IFRSs Disclosure Requirements<br />

The results demonstrated in table 2 show that there are no statistically significant differences between the two groups of companies, those listed on CASE and those listed on ASE, with respect to total disclosure score (the aggregate level of compliance with the IFRSs disclosure requirements), which is the main dependent variable in this study (P>.05).<br />

2 Normality tests performed using the Shapiro-Wilk and K-S tests (not presented here to save space but available from the authors upon request) showed that the data distribution is not normal; thus, following prior research (e.g., Ghazali & Weetman, 2006), the researcher decided to use non-parametric tests and to use normal scores in the multivariate analysis.<br />
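The Mann-Whitney comparison used here can be reproduced with a small rank-based routine. The two samples below are hypothetical disclosure indices, not the study's data, and statistical packages additionally report a p-value via a normal approximation, which this sketch omits:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples
    (ties receive average ranks; the reported U is min(U1, U2),
    matching the convention of common statistics packages)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    combined = np.concatenate([x, y])
    order = combined.argsort()
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    for v in np.unique(combined):       # average ranks over ties
        tie = combined == v
        ranks[tie] = ranks[tie].mean()
    r1 = ranks[:len(x)].sum()           # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

# Hypothetical disclosure indices for CASE firms vs ASE firms
case = [0.80, 0.85, 0.78, 0.91, 0.68]
ase = [0.78, 0.59, 0.87, 0.75]
print(mann_whitney_u(case, ase))
```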


Table 2: Mann-Whitney U Test Results for the Dependent Variable<br />

Disclosure Index (N=102) | Mann-Whitney U | Wilcoxon W | Z | Asymp. Sig. (2-tailed)<br />

Total Score | 895.500 | .013 | -.887 | .375<br />

5.2.2 Significant Differences between Egypt and Jordan in terms of the Independent Variables<br />

The results demonstrated in table 3 show that there are no statistically significant differences (P>.05) with respect to<br />

all independent variables except two of the test variables, role duality and board independence (P<.05).<br />


Table 4: Regression Results<br />

Variable | B | Std. Error | Beta | t | Sig. | Tolerance | VIF<br />

Constant | .027 | .094 | | .287 | .775 | | <br />

Public ownership ratio | -.281 | .115 | -.238 | -2.436 | .017 | 1.000 | 1.000<br />

As seen in table 4 the adjusted R 2 = .047. This implies that approximately 5% of the variation in the aggregate<br />

disclosure index is explained by variation in the public ownership ratio. In addition, the model reaches statistical<br />

significance, with F = 5.932 and significance = .017 (<.05).<br />
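The reported figures are internally consistent. Back-solving an implied R² from the published adjusted R² (approximate, since only rounded values are reported) recovers both statistics:

```python
n, k = 102, 1        # observations; predictors in this one-variable model
r2 = 0.0561          # implied R^2, back-solved from adjusted R^2 = .047

# adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
# F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))

print(round(adj_r2, 3))  # matches the reported adjusted R^2 of .047
print(round(f_stat, 2))  # close to the reported F = 5.932
```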


…results in the restriction of information to preserve power inequalities. On the other side, this lends support to agency<br />

theory: as government, private and management stockholders are in a position to get access to all the company<br />

information they need, and as public investors do not demand more compliance with the IFRSs disclosure requirements<br />

and do not put any pressure on the BOD or management to improve compliance, monitoring costs will be low, which will<br />

reduce management incentives to comply with the IFRSs disclosure requirements. In addition, this result is<br />

consistent with the notions of cost-benefit analysis: the weak enforcement of the IFRSs disclosure requirements<br />

and weak sanctions, if any, cause non-compliance costs to be less than compliance costs for companies listed on the<br />

scrutinised MENA stock exchanges, thus directing management incentives toward non-compliance. All of this<br />

contributes to the problem of decoupling, as companies listed on MENA stock exchanges state that they prepare their<br />

financial statements in accordance with the IFRSs, while none of them fully comply with these requirements.<br />

Moreover, the low explanatory power of the regression model calls for carrying out a number of interviews with<br />

the different parties involved directly or indirectly in the financial reporting process in the MENA emerging markets to<br />

explore the other factors that best explain levels of compliance with the IFRSs disclosure requirements in the region<br />

and that are difficult to quantify and analyse using quantitative research methods.<br />

References<br />

Abd-Elsalam, O. H. & Weetman, P. (2003). `Introducing International Accounting Standards to an Emerging Capital<br />

Market: Relative Familiarity & Language Effect in Egypt'. Journal of International Accounting. 12, pp. 63-84.<br />

Abdelsalam, O. & El-Masry, A. (2008). ‘The Impact of Board Independence and Ownership Structure on the Timeliness<br />

of Corporate Internet Reporting of Irish-listed Companies’. Managerial Finance. Vol. 34 (12). pp. 907-918.<br />

Akhtaruddin, M. (2005), `Corporate Mandatory Disclosure Practices in Bangladesh'. The International Journal<br />

of Accounting. 40, pp. 399-422.<br />

Aksu, M. & Kosedag, A. (2006). `Transparency and Disclosure Scores and their Determinants in Istanbul Stock<br />

Exchange'. Corporate Governance. Vol. 14 (4). pp. 277- 296.<br />

Al-Htaybat, K. (2005). ‘Financial Disclosure Practices: Theoretical Foundation and an Empirical Investigation on<br />

Jordanian Printed and Internet Formats’. PhD Thesis. University of Southampton. UK.<br />

Al-Shammari, B., Brown, P. & Tarca, A. (2008). ‘An Investigation of Compliance with International Accounting Standards by Listed<br />

Companies in the Gulf Co-operation Council Member States’. The International Journal of Accounting. Vol. 43. pp. 425-447.<br />

Arcay, M.R.B. & Vazquez, M.F.M. (2005). ‘Corporate Characteristics, Governance Rules and the Extent of<br />

Voluntary Disclosure in Spain’. Advances in Accounting. Vol. 21. pp. 299-331.<br />

Al-Jifri, K. (2008). ‘Annual Report Disclosure in a Developing Country: The Case of the UAE’. Advances in<br />

Accounting Incorporating Advances in International Accounting. Vol. 24. pp. 93-100.<br />

Baysinger, B., & Butler, H. (1985). ‘Corporate Governance and Board of Directors: Performance Effects of<br />

Changes in Board Composition’. Journal of Law. Economics and Organization 1: 101–24.<br />

CIPE (2003). ‘Corporate Governance in Morocco, Egypt, Lebanon and Jordan- Countries of the MENA Region’.<br />

Middle East and North Africa Corporate Governance Workshop. The Center for International Private Enterprise.<br />

Chau, G. K., & Gray, S. J. (2002). ‘Ownership Structure and Corporate Voluntary Disclosure in Hong Kong and<br />

Singapore’. The International Journal of Accounting. Vol. 37. pp. 247–265.<br />

Cheng, E.C.M. & Courtenay, S.M. (2006). Board Composition, Regulatory Regime, and Voluntary Disclosure. The<br />

International Journal of Accounting. Vol. 41. pp. 262-289.<br />

Dahawy, K. & Conover, T. (2007). ‘Accounting Disclosure in Companies Listed on the Egyptian Stock Exchange’.<br />

Middle Eastern Finance and Economics. Issue (1). pp. 5-20.<br />

Dahya, J., Lonie, A.A. & Power, D.M. (1996). ‘The case for separating the roles of Chairman and CEO: an analysis<br />

of stock market and accounting data’, Corporate Governance – An International Review. Vol. 4 (1). pp. 71-77.<br />

Daske, H. (2006). `Economic Benefits of Adopting IFRS or US-GAAP- Have the Expected Cost of Equity Capital<br />

Really Decreased?'. Journal of Business Finance and Accounting Research. Vol. 33 (3). pp. 329-373.<br />

Dey, A. (2008). ‘Corporate Governance and Agency Conflicts’. Journal of Accounting Research. Vol. 46 (5). pp.1143-1181.<br />

Depoers, F. (2000), `A Cost-Benefit Study of Voluntary Disclosure: Some Empirical Evidence from French Listed<br />

Companies', The European Accounting Review, 9, (2), pp.245-263.<br />



Diamond, D. W. & Verrecchia, R. E. (1991), `Disclosure, Liquidity and the Cost of Capital'. The Journal of Finance.<br />

Vol. 46 (4). September. pp. 1325-1359.<br />

Eng, L.L. & Mak, Y.T. (2003). ‘Corporate Governance and Voluntary Disclosure’. Journal of Accounting and<br />

Public Policy. Vol. 22 (4). pp. 325-345.<br />

Ezat, A., El-Masry, A. (2008). ‘The Impact of Corporate Governance on the Timeliness of Corporate Internet<br />

Reporting by Egyptian Listed Companies’. Managerial Finance. Vol. 34 (12). pp. 848-867.<br />

Fama, E. F. & Jensen, M.C. (1983). `Separation of Ownership and Control', Journal of Law and Economics. Vol. 88. pp. 301-325.<br />

Felo, A.J. (2009). ‘Voluntary Disclosure Transparency, Board Independence and Expertise, and CEO Duality’.<br />

Working Paper. http://ssrn.com/abstract=1373942.<br />

Forker, J. (1992). ‘Corporate Governance and Disclosure Quality’. Accounting and Business Research. Vol. 22 (86). pp. 111-124.<br />

Field, A. (2005). Discovering Statistics Using SPSS. (2 nd Edition). London: Sage Publications Ltd<br />

Ghazali, N. A. M. & Weetman, P. (2006), `Perpetuating Traditional Influences: Voluntary Disclosure in Malaysia<br />

Following the Economic Crisis', Journal of International Accounting, Auditing and Taxation. 15, pp. 226-248.<br />

Glaum, M. & Street, D. L. (2003). `Compliance with the Disclosure Requirements of Germany's New Market: IAS<br />

versus US GAAP'. Journal of International Financial Management and Accounting. Vol. 14 (1). pp. 64-100.<br />

Gul, F.A. & Leung, S. (2004), ‘Board Leadership, Outside Directors Expertise and Voluntary Corporate<br />

Disclosures’. Journal of Accounting and Public Policy. Vol. 23, pp. 351-79.<br />

Gray, S. J. (1988). ‘Towards a theory of cultural influence on the development of accounting systems<br />

Internationally’. Abacus. Vol. 24 (1). March. pp. 1–15.<br />

Haniffa, R. M. & Cooke, T. E. (2002). `Culture, Corporate Governance and Disclosure in Malaysian Corporations'.<br />

Abacus. Vol. 38 (3). pp. 317-349.<br />

Healy, P. M. & Palepu, G. (2001). `Information Asymmetry Corporate Disclosure, and the Capital Markets: A<br />

Review of the Empirical Disclosure Literature'. Journal of Accounting and Economics. Vol.31. pp. 405-440.<br />

Jensen, M.C. (1993). ‘The Modern Industrial Revolution, Exit, and the Failure of Internal Control Systems’. The<br />

Journal of Finance. Vol. 48. pp. 831-880.<br />

Joshi, P.L., Bresmer, W.G. & Al-Ajmi, J. (2008). ‘Perceptions of Accounting Professionals in the Adoption and<br />

Implementation of a Single Set of Global Accounting Standards: Evidence from Bahrain’. Advances in<br />

Accounting Incorporating Advances in International Accounting. Vol. 24. pp. 41-48.<br />

Mallin, C.A. (2009). Corporate Governance. (3 rd Edition). New York: Oxford University Press.<br />

Muslu,V. (2005). ‘Effect of Board Independence on Incentive Compensation and Compensation Disclosure:<br />

Evidence from Europe’.Working Paper. available: http://www.ssrn.com. Accessed: 23-9-2009.<br />

Naser, K., Al-Hussaini, A., Al-Kwari, D. & Nuseibeh, R. (2006). ‘Determinants of Extent of Corporate Social Disclosure in<br />

Developing Countries: The Case of Qatar’. Advances in International Accounting. Vol. 19. pp. 1-23.<br />

Omar, B.F.A (2007). ‘Exploring the Aggregate, Mandatory and Voluntary Financial Disclosure Behaviour Under a<br />

New Regulatory Environment: The Case of Jordan’. PhD Thesis. The University of Hull. UK.<br />

Owusu-Ansah, S. & Yeoh, J. (2005). `The Effect of Legislation on Corporate Disclosure Practices'. Abacus. Vol. 41 (1). pp. 92-109.<br />

Patton, J. & Zelenka, I. (1997). ‘An empirical analysis of the determinants of the extent of disclosure in annual reports of<br />

joint stock companies in the Czech Republic’. The European Accounting Review. Vol.6 (4). pp 605-626.<br />

Samaha, K. (2006). 'Compliance with International Accounting Standards: Some Empirical Evidence from the Cairo &<br />

Alexandria Stock Exchange'. Accounting, Management & Insurance Review. Issue (7). Cairo University Press.<br />

Suwaidan, M.S. (1997). `Voluntary Disclosure of Accounting Information: The Case of Jordan'. PhD Thesis. University of Aberdeen. UK.<br />

Tricker, Bob (2009). Corporate Governance: Principles, Policies and Practices. New York: Oxford University Press.<br />

IFC & Hawkamah (2008). 'A Corporate Governance Survey of Listed Companies and Banks Across the Middle East<br />

and North Africa'. Available: http://www.ifc.org/ifcext/mena.nsf. Accessed 25/2/2010.<br />



BUSINESS PERFORMANCE EVALUATION MODELS AND DECISION SUPPORT SYSTEM FOR THE<br />

ELECTRONIC INDUSTRY<br />

Wu Wen, Department of Information Management, Lunghwa University of Science and Technology, Taiwan, R.O.C.<br />

Email: wenwu@mail.lhu.edu.tw<br />

Abstract. In this paper, a decision support system is built for evaluating the business performance of manufacturing companies in the<br />

electronic industry. To select better performance indices for the decision support system, we adopt literature review, expert<br />

questionnaires, and factor analysis (principal component analysis). Through a two-phase expert meeting and questionnaires, 16 out of 28<br />

indices are selected. Next, based on the result of factor analysis, there are four factors (i.e., categories): (1) Profitability<br />

Ability: return on assets, return on stockholders’ equity, return on investment, and net profit margin; (2) Efficiency Ability: average<br />

collection period, accounts receivable turnover, inventory turnover, and working capital; and (3) Liquidity: current ratio, quick ratio, and<br />

cash ratio. Additionally, the 5 non-financial indices are included as the fourth factor. Furthermore, an artificial neural network model is<br />

created for forecasting the net profit of a company. Finally, the analytic hierarchy process (AHP) is adopted to evaluate business performance<br />

and a management decision support system is built for providing managerial suggestions to middle or top managers for<br />

conducting business.<br />

Keywords: Factor analysis; Principal component analysis; Analytic hierarchy process; Business performance; Financial index<br />

1 Introduction<br />

Accurate business performance evaluation is a key to success for enterprises. In particular, a company in the<br />

competitive environment of the 21st century requires substantial financial and non-financial analysis, rapid response,<br />

efficient management, and high quality of products and services to maintain its superiority. In the past, annual<br />

reports on financial statements such as the income statement and balance sheet were prepared manually and used to<br />

examine a company’s performance. Meanwhile, without considering other competitors in the same industry, a<br />

company may slide into complacency. Furthermore, in the modern era, it is unwise to merely calculate models by<br />

hand. Instead, we should build a business performance evaluation decision support system that provides optimal<br />

suggestions to managers for steering their company through a rapidly changing global environment.<br />

Therefore, this paper employs basic statistics, principal component analysis, and the analytic hierarchy process<br />

to build a business performance evaluation model and decision support system for middle and top managers.<br />

Business performance can be measured using financial and non-financial factors. Based on<br />

literature review, we have studied many financial and non-financial indices related to business performance.<br />

Grafton, Lillis, and Widener (Grafton et al., 2010) used a structural equation model to analyze the degree of<br />

commonality between measures identified as decision-facilitating and decision-influencing, which is significantly<br />

associated with the use of decision-facilitating measures for both feedback and feed-forward control. They also<br />

suggested that managers use the multiple financial and non-financial performance indicators increasingly incorporated<br />

in contemporary performance measurement systems. Saranga and Moser (Saranga & Moser, 2010) believed that<br />

owing to the potential to strategically influence both operational performance as well as financial performance<br />

outcomes, Purchasing and Supply Management (PSM) today is increasingly playing an important role to senior<br />

managers. They developed a comprehensive performance measurement framework using the classical and two-stage<br />

Value Chain Data Envelopment Analysis models, which make use of multiple PSM measures at various stages. In<br />

their paper, a single efficiency measure that estimates the all-round performance of a PSM function was provided.<br />

Eddy, Paula, and Van (Eddy et al., 2010) studied how the organization and presentation of performance<br />

measures affect how evaluators weight financial and non-financial measures. Through two experiments,<br />

they found that when the performance differences belong to the financial category, evaluators using a BSC format<br />

place more weight on financial category measures than evaluators using an unformatted scorecard. The second<br />

finding is that when performance markers are added to the scorecards, evaluators using a BSC format weight measures<br />

in any category containing a performance difference more heavily than evaluators using an unformatted scorecard.<br />

By studying financial performance indices, Sohn et al. (Sohn et al., 2007) proposed a structural equation model<br />

(SEM) to examine the relationship between technology evaluation factors and the financial performance. It can be<br />



used not only for the effective management of the technology credit funds for small and medium enterprises<br />

(SME) but also for evaluating financial performance of SMEs based on the technology evaluation of companies.<br />

Their results showed that the operating ability of the manager has the highest direct effect on the financial performance<br />

index (FPI) and the level of technology has the highest indirect effect on the FPI. Knowledge and experience of<br />

the manager as well as marketing of technology have a positive effect on the FPI. Ocal et al. (Ocal et al., 2007) used<br />

factor analysis to select the financial indicators for evaluating financial trend of Turkish construction industry. They<br />

collected 5 years of data starting from 1997 to 2001 for 28 Istanbul Stock Exchange traded construction companies.<br />

In the factor analysis, there were 25 ratios adopted. According to the values of the correlation matrix, 9 ratios had a<br />

weak correlation with the others and could be removed. The results of factor analysis showed that 5 factors would be<br />

extracted (i.e., eigenvalues are larger than 1). They are named as liquidity factor, capital structure and profitability<br />

factor, activity efficiency factor, profit margin and growth factor, and assets structure factor. Wen et al. (Wen et al.,<br />

2008) presented a knowledge-based decision support system for evaluating enterprise performance. The KDSS<br />

system provides not only queries over a company’s various financial data, but also enterprise performance assessment based on<br />

knowledge reasoning. The system integrates a database, a knowledge base, an inference engine, and a model base.<br />

The model in the model base adopts 12 key financial indicators: debt to total assets ratio, permanent capital to fixed<br />

assets ratio, current ratio, quick ratio, accounts receivable turnover, average accounts receivable collection,<br />

inventory turnover ratio, fixed asset turnover ratio, total asset turnover ratio, return on assets, return on equity, and<br />

net profit ratio to assess business performance. Through a rating mechanism, a company can know what position it<br />

locates in its industry. Tseng et al. (Tseng et al., 2009) developed a performance evaluation model for assessing<br />

high-tech manufacturing companies based on a new set of financial and non-financial performance indicators. A<br />

data envelopment analysis (DEA), an analytic hierarchy process (AHP), and a fuzzy multi-criteria decision-making<br />

approach are employed in the model. The data were collected from 5 target companies that produce large-size TFT-<br />

LCD panels. Their results showed that the companies focused on competition performance and financial<br />

performance. The paper also suggested increasing market share and sales growth rate, maintaining steady and sufficient<br />

upstream materials and supplies, and enhancing the ability to obtain critical technology and patents. Lin et al. (Lin et<br />

al.,2005) applied a structural equation model to supply chain quality management and organization performance.<br />

Questionnaire data from both Taiwanese and Hong Kong supply chain firms were collected. Both data sets show<br />

direct effects in the relationships between QM practices and supplier participation, supplier participation<br />

and organization performance, QM practices and supplier selection, and supplier participation and supplier selection.<br />

Moreover, the relationship between supplier selection and organizational performance shows indirect influences, as<br />

does that between QM practices and organizational performance. Hoque (Hoque, 2004) surveyed and discussed the<br />

impact on performance of two factors, strategy and environmental uncertainty, using a sample of 52 manufacturing firms.<br />

His result shows that management’s strategy choice is markedly positively related to performance. However, there is no<br />

evidence to prove the relationship between environmental uncertainty and performance. For financial analysis, a<br />

better financial company should have 4 abilities: liquidity/debt paying ability, financial structure (stability),<br />

activity/efficiency ability, and profitability (Gibson, 2009; McGuigan et al., 2009; Ocal et al., 2007; Sohn et al.,<br />

2007; Tseng et al., 2009; Wen et al., 2008). Many other performance-related papers have been published (Wwyer et<br />

al., 2003; Lam, 2004; Lin et al., 2005).<br />

2 Methods<br />

According to the literature review, we collected and filtered 28 indices in common use for evaluating business<br />

performance. Among the 28 indices, 18 indices are financial indices and 10 indices are non-financial indices. The<br />

main participants, who filled in the questionnaire, are 34 experts from electronic companies, academia, and<br />

accounting departments. Through a two-phase expert meeting and questionnaires, we collected, discussed and<br />

evaluated all 28 indicators. The survey adopts a 5-point Likert scale (1 = not important at all, 5 = very important).<br />

In the first phase, in order to choose critical indicators, we utilize 3 approaches to reduce the indices from 28 to<br />

18. First, based on the total number of “Important” and “Very Important” responses shown in the questionnaires, any<br />

financial index whose total is less than or equal to 27 is deleted, and any non-financial index<br />

whose total is less than or equal to 28 is deleted (see Table 1). After counting the totals of<br />

“Important” and “Very Important,” 10 out of 28 indices are removed. Therefore, times interest earned, debt-to-equity,<br />

total asset turnover, fixed asset turnover, gross profit margin, productivity, number of patents, upstream<br />

materials and supplies, and downstream tactical alliances are erased.<br />
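The first-phase cut can be sketched as a simple threshold filter. The counts below are a hypothetical subset of Table 1; the thresholds come from the text:

```python
# index name -> (count of "Important", count of "Very Important")
# (a hypothetical subset of the survey counts in Table 1)
financial_counts = {
    "current ratio": (24, 6),          # total 30 -> kept
    "times interest earned": (9, 10),  # total 19 -> deleted
    "debt-to-equity": (19, 8),         # total 27 -> deleted
    "cash ratio": (20, 13),            # total 33 -> kept
}

FINANCIAL_THRESHOLD = 27  # financial indices with total <= 27 are deleted

kept = [name for name, (imp, very) in financial_counts.items()
        if imp + very > FINANCIAL_THRESHOLD]
print(kept)  # ['current ratio', 'cash ratio']
```

The non-financial indices would go through the same filter with a threshold of 28.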



NO | Indicator | Important | Very Important | Total<br />

1 current ratio 24 6 30<br />

2 quick ratio 21 9 30<br />

3 cash ratio 20 13 33<br />

4 working capital 18 13 31<br />

5 Permanent capital to fixed assets 22 6 28<br />

6 Debt ratio 16 16 32<br />

7 Times interest earned 9 10 19<br />

8 Debt-to-equity 19 8 27<br />

9 Inventory turnover 14 14 28<br />

10 Total asset turnover 15 5 20<br />

11 Accounts receivable turnover 14 16 30<br />

12 Fixed asset turnover 11 5 16<br />

13 Average collection period 16 13 29<br />

14 Return on assets 19 10 29<br />

15 Return on stockholders’ equity 14 16 30<br />

16 Return on investment 20 10 30<br />

17 Net profit margin 16 15 31<br />

18 Gross profit margin 13 13 26<br />

19 Product competitiveness 13 20 33<br />

20 Market share 19 11 30<br />

21 Productivity 20 8 28<br />

22 Product quality level 13 18 31<br />

23 Number of patents 14 25 25<br />

24 R&D expenditure ratio 18 26 26<br />

25 Ability to obtain critical technology 11 33 33<br />

26 Capability to improve manufacturing<br />

processes<br />

19 29 29<br />

27 Upstream materials and supplies 19 27 27<br />

28 Downstream tactical alliances 17 27 27<br />

Table 1: Financial and non-financial indices (total number of “Important” and “Very Important” responses)<br />

Second, the study also examines the communalities of the 18 indices as a criterion for index selection. We divided the 18<br />

indices into two groups, financial indices and non-financial indices. The financial group has 13 indices and the non-financial<br />

group has 5 indices. For the financial group, in line with the principal component values of<br />

communalities, we deleted the indicators permanent capital to fixed assets and debt ratio, whose extraction values<br />

of communalities are lower than 0.2 (see Table 2). Thus, 11 out of 13 indices are chosen as (1) Profitability Ability: return<br />

on assets, return on stockholders’ equity, return on investment, and net profit margin; (2) Efficiency Ability: average<br />

collection period, accounts receivable turnover, inventory turnover, and working capital; and (3) Liquidity: current<br />

ratio, quick ratio, and cash ratio. For the non-financial group, no index’s communality is lower than<br />

0.2. Thus, the 5 non-financial indices retained are product competitiveness, market share, product quality level, ability to<br />

obtain critical technology, and capability to improve manufacturing processes.<br />

3 System Implementation<br />

In the second phase, once the 16 critical indices were chosen, we invited 15 experts from the electronics industry, academia, and accounting to participate in a survey via an AHP electronic questionnaire. There are 5 pair-wise comparison matrices. The first pair-wise comparison matrix is for business performance as a whole and covers the four abilities: debt paying-efficiency ability, efficiency-profitability ability, debt paying ability, and non-financial ability. The second pair-wise comparison matrix, for debt paying-efficiency ability, consists of cash ratio, working capital, inventory turnover, and accounts receivable turnover. The third, for efficiency-profitability ability, is composed of average collection period, return on assets, return on stockholders’ equity, and return on investment. The fourth, for debt paying ability, includes current ratio and quick ratio. The fifth, for non-financial ability, covers product competitiveness, market share, product quality level, ability to obtain critical technology, and capability to improve manufacturing processes. Each ability is compared with itself and with the others listed in the row at the top of the square matrix. The values (i.e., judgments) in a matrix express how important the item in the left column is relative to each item in the top row. The values on the diagonal of the matrix are therefore 1, because each ability is compared with itself. For example, the value 5 in the (1, 2), or first-row second-column, position shows that debt paying-efficiency ability is moderately more important than efficiency-profitability ability, while the value 1/2 in the (2, 3) position shows that debt paying ability is slightly more important than efficiency-profitability ability (see Figure 1 and Figure 2).<br />
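The weight derivation behind such a pair-wise matrix can be sketched in a few lines. This is an illustration only, not the authors' system: it uses the common geometric-mean approximation of AHP priorities, and the 2x2 judgment matrix is hypothetical.

```python
import numpy as np

def ahp_weights(matrix):
    """Approximate AHP priority weights with the geometric-mean (row) method."""
    m = np.asarray(matrix, dtype=float)
    gm = m.prod(axis=1) ** (1.0 / m.shape[0])  # geometric mean of each row
    return gm / gm.sum()                        # normalize so weights sum to 1

# Hypothetical judgment: the first ability is 5 times as important as the
# second, so the reciprocal 1/5 appears in the mirrored position.
pairwise = [[1.0, 5.0],
            [0.2, 1.0]]
w = ahp_weights(pairwise)  # -> weights of 5/6 and 1/6
```

The reciprocal structure (entry (2, 1) is the inverse of entry (1, 2)) is what makes the diagonal of ones and the 1/2-style fractional judgments in the text meaningful.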

Figure 1. AHP electronic questionnaire.<br />



Figure 2. AHP electronic questionnaire (cont.).<br />

The system checks the consistency of every matrix. Once all five matrices pass the consistency test, the average weights can be computed, as shown in Figure 3. Figure 3 also lists the deciles of each financial index, so that a company’s actual value on an index can be compared with the average value of that index in the electronics industry. Consequently, the total business performance score of a company can be calculated.<br />
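The consistency test and the weighted total score described above can be sketched as follows. This is a minimal illustration assuming Saaty's classical consistency ratio; the judgment matrix, weights, and decile scores are hypothetical, not taken from the system.

```python
import numpy as np

# Saaty's random consistency index by matrix size (1x1 and 2x2 reciprocal
# matrices are always consistent, so their CR is taken as zero).
SAATY_RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def consistency_ratio(matrix):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    m = np.asarray(matrix, dtype=float)
    n = m.shape[0]
    if n < 3:
        return 0.0
    lam_max = max(np.linalg.eigvals(m).real)  # principal eigenvalue
    return ((lam_max - n) / (n - 1)) / SAATY_RI[n]

def total_score(weights, decile_scores):
    """Weighted performance score: index weights times decile scores (1..10)."""
    return float(np.dot(weights, decile_scores))

# A perfectly consistent 3x3 judgment matrix yields CR = 0 (passes the test).
cr = consistency_ratio([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]])
score = total_score([0.5, 0.3, 0.2], [10, 5, 1])  # -> 6.7
```

A common acceptance threshold for the consistency test is CR < 0.1.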

Figure 3. The total scores of business performance for a company.<br />



Figure 4. Line graph of business performance for various companies, 2003-2008.<br />

Figure 4 presents a line graph of business performance for various companies over 2003-2008. Through the diagram, we can easily see whether a company’s business performance has improved or deteriorated. Furthermore, an artificial neural network model is created for forecasting a company’s net profit. Figure 5 represents the artificial neural network model’s training and test processes. X1, X2, and X3 represent the three input variables in the input layer, and Y represents the output variable in the output layer. In other words, we use the net profits of the first, second, and third quarters to predict the net profit of the fourth quarter. Training ends when the mean squared error (MSE) falls below 0.001. Figure 5 also presents the net profit for the second quarter of fiscal 2010, predicted using the net profits of the third quarter of fiscal 2009, the fourth quarter of fiscal 2009, and the first quarter of fiscal 2010.<br />
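The quarterly forecasting scheme can be illustrated with a small stand-in network. Only the input/output layout (three lagged quarters in, the next quarter out) and the MSE < 0.001 stopping rule follow the text; the profit series, network size, and learning rate below are hypothetical, and the authors' actual ANN architecture is not specified in this section.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quarterly net profits (scaled units).
profits = np.array([0.10, 0.12, 0.15, 0.14, 0.18, 0.20, 0.21])
X = np.array([profits[i:i + 3] for i in range(len(profits) - 3)])  # X1, X2, X3
y = profits[3:]                                                    # Y

# A minimal one-hidden-layer MLP: tanh hidden units, linear output.
W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, 4);      b2 = 0.0

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

mse = np.inf
for _ in range(100_000):
    h, pred = forward(X)
    err = pred - y
    mse = float(np.mean(err ** 2))
    if mse < 0.001:          # the paper's stopping criterion: MSE below 0.001
        break
    # Backpropagation with plain gradient descent (constant factors from the
    # MSE derivative are folded into the learning rate).
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(y);  gb1 = dh.mean(axis=0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1; W2 -= 0.1 * gW2; b2 -= 0.1 * gb2

# Forecast the next quarter from the three most recent quarters.
_, next_quarter = forward(profits[-3:][None, :])
```

Rolling the three most recent observed quarters forward like this is exactly the prediction step described for the second quarter of fiscal 2010.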

Figure 5. An artificial neural network model for predicting the net profit of a manufacturing company in the electronics industry.<br />

4 Conclusions and future work<br />

This paper constructs a business performance evaluation model using financial and non-financial indices, together with a decision support system that provides vital suggestions to top managers for running their company.<br />



To create the model, we examined more than 40 indices for evaluating business performance based on a literature review, of which 28 were chosen as critical indices. Next, a two-phase expert meeting and questionnaire were conducted to count the total number of “Important” and “Very Important” ratings for each financial and non-financial index. After that, we used the communality values from principal component analysis to select the indicators. Consequently, 16 of the 28 indices remained for evaluating business performance. Finally, principal component analysis was adopted to run the factor analysis. Based on its results, there are four factors: (1) Profitability Ability: return on assets, return on stockholders’ equity, return on investment, and net profit margin; (2) Efficiency Ability: average collection period, accounts receivable turnover, inventory turnover, and working capital; and (3) Liquidity: current ratio, quick ratio, and cash ratio. The 5 non-financial indices are included as the fourth factor. Consequently, the four factors with their 16 indices are processed with the AHP method to determine the weight of each index.<br />

Next, a management decision support system is built to provide managerial suggestions to middle and top managers for conducting business. In the decision support system, important information such as financial information can be collected via online access, and an artificial neural network model is created for forecasting a company’s net profit. Additionally, the analytic hierarchy process (AHP) is adopted to evaluate business performance. The system also gives top managers critical knowledge of the company’s business performance and of its position in the electronics industry. Finally, the DSS can automatically provide vital, confidential internal suggestions to top-level managers for running the company.<br />

5 Acknowledgements<br />

This research project was sponsored by NSC 98-2410-H-262-005-MY2, Republic of China.<br />

6 References<br />

S. Dwyer, O. C. Richard & K. Chadwick, “Gender diversity in management and firm performance: the influence of growth orientation and organizational culture,” Journal of Business Research, vol. 56, pp. 1009-1019, 2003.<br />
E. Cardinaels & P. M. G. van Veen-Dirks, “Financial versus non-financial information: The impact of information organization and presentation in a Balanced Scorecard,” Accounting, Organizations and Society, vol. 35, pp. 565-578, 2010.<br />
C. H. Gibson, Financial Reporting and Analysis, South-Western Cengage Learning, 2009.<br />
J. Grafton, A. M. Lillis & S. K. Widener, “The role of performance measurement and evaluation in building organizational capabilities and performance,” Accounting, Organizations and Society, vol. 35, pp. 689-706, 2010.<br />
Z. Hoque, “A contingency model of the association between strategy, environmental uncertainty and performance measurement: impact on organizational performance,” International Business Review, vol. 13, pp. 485-502, 2004.<br />
M. Lam, “Neural network techniques for financial performance prediction: integrating fundamental and technical analysis,” Decision Support Systems, vol. 37, pp. 567-581, 2004.<br />
M. J. Lebas, “Performance measurement and performance management,” International Journal of Production Economics, vol. 41, pp. 23-35, 1995.<br />
C. Lin, W. S. Chow, C. N. Madu, C. H. Kuei & P. P. Yu, “A structural equation model of supply chain quality management and organizational performance,” International Journal of Production Economics, vol. 96, pp. 355-365, 2005.<br />
J. R. McGuigan, W. J. Kretlow & R. C. Moyer, Contemporary Corporate Finance, South-Western Cengage Learning, 2009.<br />
M. E. Öcal, E. L. Oral, E. Erdis & G. Vural, “Industry financial ratios: application of factor analysis in Turkish construction industry,” Building and Environment, vol. 42, pp. 385-392, 2007.<br />
H. Saranga & R. Moser, “Performance evaluation of purchasing and supply management using value chain DEA approach,” European Journal of Operational Research, vol. 207, pp. 197-205, 2010.<br />
S. Y. Sohn, H. S. Kim & T. H. Moon, “Predicting the financial performance index of technology fund for SME using structural equation model,” Expert Systems with Applications, vol. 32, pp. 890-898, 2007.<br />
F. M. Tseng, Y. J. Chiu & J. S. Chen, “Measuring business performance in the high-tech manufacturing industry: A case study of Taiwan’s large-sized TFT-LCD panel companies,” Omega, vol. 37, pp. 686-697, 2009.<br />
W. Wen, Y. H. Chen & I. C. Chen, “A knowledge-based decision support system for measuring enterprise performance,” Knowledge-Based Systems, vol. 21, pp. 148-163, 2008.<br />



APPLYING THE CONCEPT OF FAIR VALUE TO BALANCE SHEET ITEMS. THE CASE OF ROMANIA<br />

Marinela-Daniela Manea, Valahia University of Târgovişte, Romania<br />
Post-doctoral researcher at “Alexandru Ioan Cuza” University of Iaşi<br />
Email: marinelamanea7@yahoo.com<br />

Abstract: Recently, as far as assets are concerned, it has become accepted to measure certain items (particularly fixed assets) at the estimated value of the future income flows they will provide to the entity (the recoverable amount) rather than at past expenses expressed at historical cost, which amounts to a radical change in accounting. From this perspective, the objective of accounting is to look into and analyze the future, precisely in order to transpose it into balance sheet items.<br />
Under the current circumstances of scientific research in the field, even though Romanian accounting practice has made significant normative progress, adopting (sometimes to the point of identifying with them) the concepts and definitions of financial reporting standards, Romanian norms do not offer many alternatives: they remain strictly prescriptive regulations, leaving little room for choosing an accounting treatment or applying a policy established through professional judgement correlated with the normative requirements.<br />
The current research focuses on how the fair value concept is applied, acknowledging that no communication channel has yet been found between accounting norms and policies, between the freedom to choose accounting procedures and the duty to provide relevant, credible information to users.<br />

Keywords: fair value, annual financial statement, tangible fixed assets<br />

JEL classification: M 41<br />

1. Introduction<br />

Internationally, two sets of accounting rules predominate nowadays: the international ones, developed by the IASB, and the American ones, developed by the FASB. The IASB has accepted, in its own way, the influence of the FASB’s rules by participating in joint projects with the FASB whose purpose is to limit the differences between IFRS and US GAAP. It must be noted that fair value measurement is one of the convergence issues between the two sets of rules, the IASB having already published international financial reporting standards IFRS 1 to IFRS 8. Among these, the American influence is obvious in IFRS 5 “Non-current Assets Held for Sale and Discontinued Operations” and IFRS 3 “Business Combinations”. The IASB has not yet issued a standard exclusively dedicated to fair value; it has limited itself to making a few changes to IAS 16 “Property, Plant and Equipment” and to developing IFRSs that require the use of fair value without detailing how to obtain it. While initially the IASB was neutral among the prescribed measurement bases, in recent years a stronger reliance on fair value, and therefore its extensive use, has been requested. International accounting rules are becoming more and more future-oriented, seeking to support the predictions of information users. Thus, the IASB once indicated that the fair presentation of an entity’s results and financial position is more important than the application of a set of rules.<br />

2. Applications of fair value in the recognition of financial statement elements<br />

Applications of fair value for the recognition of patrimonial items, and for their subsequent measurement, can be at least the following [Deaconu A.]:<br />

• measurement of intangible fixed assets for which there is a potential buyer, starting with the second year of holding in the entity, at fair value;<br />
• recognition of depreciated (impaired) tangible/intangible fixed assets either at the utilization value (value in use) or at the net selling price;<br />
• tangible/intangible fixed assets contributed to the share capital are reflected in accounting at their market value;<br />
• tangible/intangible fixed assets obtained from exchange transactions are evaluated at market value, or based on best estimates where market information is insufficient;<br />
• tangible/intangible fixed assets from business combinations are recognized in the patrimony at market value;<br />
• the residual value of tangible/intangible fixed assets is measured and reflected in accounting at market value;<br />
• recognition of tangible assets rarely traded on active markets, such as special equipment, at replacement cost, a representation of fair value in accounting;<br />
• both initial and subsequent recognition of tangible fixed assets held for sale at the lower of book value and fair value less costs to sell;<br />
• subsequent recognition of property investments, in agreement with the conditions of IAS 40 “Investment Property”, at market value;<br />
• initial recognition of goods acquired through leasing at the lower of the good’s market value and the present value of the minimum lease payments;<br />
• initial recognition of long-term receivables at their discounted, market value;<br />
• subsequent recognition (at the end of the financial year) of inventories of goods, products, merchandise etc. at net realizable value (market value), precisely to identify any value losses or reversals;<br />
• at the initial recognition (at harvest time) of agricultural produce inventories related to biological assets, their market value is used;<br />
• initial recognition of trade receivables related to current-activity revenue, in accordance with the conditions of IAS 18 “Revenue”, at fair value, the discounted value of the amounts to be received;<br />
• at the end of the financial year, subsequent recognition of trade receivables and payables denominated in foreign currencies at their updated value;<br />
• debts related to employee benefits are initially recognized at the value of the service provided, or at a discounted value if the contributions fall due more than one year after the accounting period in which the services were rendered;<br />
• at initial recognition, long-term liabilities are evaluated at their settlement (reimbursement) value or at the discounted value;<br />
• subsequent recognition, at the financial year-end, of cash denominated in foreign currencies at the value obtained by translating the foreign currency amounts.<br />

3. The application of fair value in Romania<br />

For Romania, fair value is still new. It is difficult for professional accountants to clarify it on a conceptual level, as will be seen from the research performed in this paper, but even more problematic is its practical application. In this regard, a justified attitude of reticence can be noticed in the academic and professional environment concerning the possibility of introducing an accounting system that includes, or is even based on, fair value. What could explain this attitude?<br />

Mainly, a system focused on fair value, as representative of value-based management, is directed towards maximizing shareholders’ and creditors’ wealth. Under the circumstances of the unstable Romanian economic environment, where bankruptcy is manifesting strongly for too many entities, does the concern of protecting these shareholders and creditors even exist? Since he has brought funds as well, will the minority shareholder be granted protection and will his economic interests be pursued? Is it possible, in the Romanian economy, to obtain accounting support guaranteeing the minority shareholder the profitability corresponding to the invested capital and the risks he has taken? These are questions that the Romanian economic environment must answer.<br />

Carrying on with the idea of protecting the minority shareholder, Romanian accounting should find the best solution for measuring the patrimonial items mirrored in the financial statements at the lower of historical cost and fair value. At present, in Romania, at least on a conceptual level, a combination of the two systems has been adopted (historical cost and fair value), with the former predominating. Beyond the provisions of the accounting regulations, putting the fair value-based system into practice remains a necessity rather than an actual fact.<br />

How did this situation come to be, actually? One possible explanation would be that, for the moment, too many<br />

hindrances are restricting the applicability of the concept, such as:<br />

• rather limited in-depth study and theoretical reflection on the alternative measurement methods in accounting;<br />
• the insufficient theoretical development of the fair value concept, whose successive modelling stages are not known in practice;<br />
• the scarcity of information on the market, as a result of imperfect economic conditions;<br />
• the opacity and lack of vision of professional accountants who are not willing to exchange a familiar, easy-to-use valuation system for another, more complex one that requires alterations and estimations difficult to achieve;<br />
• the reduced ability of the present Romanian accounting system to apply measurement at fair value, which requires specialised professionals (usually, fair value is the valuers’ attribute).<br />

Concerning the last point, only a few entities can afford hiring a professional valuer or creating a specialized department, because this implies a sustained financial effort. Using outside consultants is not an option for Romanian companies either, especially during this crisis period. This is why, in our opinion, if professional accountants had the necessary expertise, they could successfully shape the fair value at a minimum cost for the entity to which they provide accounting services. Additionally, the objectivity of the estimation can be guaranteed through the ethical standards of an accredited body to which the professional accountant has adhered.<br />

3.1 Applications of fair value for stocks<br />

In the case of stocks obtained free of charge, either surpluses found at the inventory count or donations received, evaluation presupposes the use of fair value at the entry moment, materialized in the market price. Although not mentioning it by name, Romanian accounting regulations have used, over time, several terms to designate fair value, such as utility value (Accounting Law no. 82/1991) and market price or other value (OMFP no. 1752/2005). For assets received through donation, both the Romanian regulations and the international ones agree on using some form of fair value.<br />
For stocks acquired at a global cost, specific to coupled elements, each good must be individually attributed an acquisition/production cost, in proportion to its market/net realizable value at the time of the evaluation.<br />
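The proportional allocation of a global cost can be sketched as simple arithmetic; the lot cost and item values below are hypothetical.

```python
def allocate_global_cost(global_cost, market_values):
    """Split a lump-sum acquisition cost across coupled items in proportion
    to each item's market / net realizable value at the evaluation date."""
    total = sum(market_values)
    return [global_cost * v / total for v in market_values]

# Hypothetical lot bought for 900, containing items worth 100, 200 and 300
# on the market at the evaluation date.
parts = allocate_global_cost(900.0, [100.0, 200.0, 300.0])  # -> [150.0, 300.0, 450.0]
```

By construction the allocated costs sum back to the global cost, so no part of the lump-sum price is lost or double counted.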

To obtain the net realizable value of inventories at the end of the financial year, we take into consideration the purpose for which they are held, as follows:<br />
• for raw materials, meant for consumption, it is recommended to use the replacement cost;<br />
• for finished products, semi-finished goods and merchandise, meant for sale, the market value is determined and the items affecting the net realizable amount are deducted, such as commercial discounts, administration and selling expenses still to be incurred and, for products in progress, production costs still to be incurred.<br />
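For goods meant for sale, the deduction logic above reads as simple arithmetic; the figures in this sketch are hypothetical.

```python
def net_realizable_value(market_value, commercial_discounts=0.0,
                         selling_and_admin_costs=0.0, completion_costs=0.0):
    """NRV: estimated selling (market) value minus discounts, remaining
    selling/administration expenses and, for work in progress, the
    production costs still to be incurred."""
    return market_value - commercial_discounts - selling_and_admin_costs - completion_costs

# Hypothetical work-in-progress item: market value 120, discounts 5,
# selling expenses 8, production costs still needed to finish it 22.
nrv = net_realizable_value(120.0, 5.0, 8.0, 22.0)  # -> 85.0
```

At the year-end comparison, a value loss would be recognized only if this NRV falls below the item's carrying cost.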

3.2 Application of fair value for intangible assets<br />

In the case of intangible assets, fair value comes into play in the following situations: impairment, revaluation, determining the residual value, exchanges of assets, and business combinations.<br />

Impairment of intangible assets is currently included in the general rules applicable to all assets, abandoning the additional requirement of IAS 38 “Intangible Assets”, which stated that the entity should estimate the recoverable amount at least once a year, even when there was no indication of impairment, for assets not yet available for use and for those amortized over a period longer than 20 years. In the current version, for each intangible asset the entity compares its recoverable amount with the book value at the end of each financial year, and whenever there is an indication of a likely loss of value. Also, for assets with an indefinite useful life, IAS 36 “Impairment of Assets” requires annual testing, regardless of whether there is any indication of impairment.<br />

Revaluation of intangible assets, as an alternative treatment for subsequent measurement, goes through the same steps applicable to tangible assets: estimating the fair value of the asset, correcting the accumulated amortization, and accounting for the surplus or decrease in value resulting from the revaluation. Problems arise in identifying an active market for such assets because they are either unique elements (such as trademarks, patents, film publishing rights) or, although they can be traded, such operations are rare, negotiated directly between buyer and seller, and prices are not generally available.<br />

The most appropriate method for revaluing intangible fixed assets is market comparison, especially for those assets for which there is a potential buyer or a specific market: certain software products, concessions, certain types of fishing licenses, taxi licenses, production quotas.<br />

For intangible assets, estimating the residual value is rarely possible, due to their specific nature: they are often dedicated only to the current user, and their very long amortization period (20 years or even more) typically covers their entire economic use. International standard setters usually assume a zero residual value for intangible assets. But there are intangible assets, such as software products, with a shorter life that resist obsolescence well enough to still provide utility at the end of their useful life. In these cases the residual value is estimated as a fair value determined through the market comparison approach. The starting point in estimating the residual value is the selling price of similar assets (operated in similar conditions and with a life period similar to that of the evaluated asset) minus the estimated costs of sale. The value judgement is made in relation to the expected age and condition of the asset at the end of its useful life; in other words, current values are used, without projecting the changes in prices or other variables that would develop over the useful life. It should be noted that the estimates refer not to the present but to a future moment, which is why the residual value will be periodically re-estimated, depending on the evolution of market conditions specific to the asset.<br />

For intangible assets acquired in exchange for the surrender of non-monetary assets, whether or not combined with monetary assets, international accounting standards accept fair value measurement. Exceptions to this rule are the following cases:<br />
• the exchange transaction has no commercial substance, which means that the future cash flows generated by the activity do not change as a result of the exchange transaction;<br />
• the given asset is, in turn, valued at fair value;<br />
• the fair value of neither the received asset nor the given asset can be reliably measured.<br />
The most appropriate method of measuring fair value appears to be the market comparison method [Ristea M., Dumitru C., Irimescu A.], but where information about transactions with similar goods is insufficient, valuation models are applied. Also, if the fair values of both assets subject to the exchange can be reliably measured, it is advisable to use the fair value of the given asset, unless the fair value of the received asset is more clearly evident, in which case the latter is preferred.<br />

The current Romanian norms account for the exchange of assets as a simultaneous sale-purchase operation,<br />

although the final outcome is zero.<br />

Intangibles entering the entity’s assets through business combinations are recognized at fair value from the entry date. The fair value of intangible assets resulting from a business combination can be estimated in most cases, except for assets that are not separable and whose fair value cannot be reliably estimated, given the existence of several variables that cannot be evaluated. As evaluation methodology, the market comparison method is preferable when there is an active market and actual prices are known; otherwise, indirect estimation methods based on income are used. When none of the above-mentioned evaluation techniques provides a credible measurement of an intangible resulting from a business combination, the asset is not recognized separately but included in goodwill.<br />

3.3 Applications of the fair value for tangible assets<br />

In the case of tangible assets, fair value comes into play in the following situations: when recognizing impairment, at revaluation, when determining the residual value, when exchanging assets, and within leasing contracts.<br />
IFRS treats the impairment of tangible assets in IAS 36 “Impairment of Assets”, specifying the need to go through the steps of the impairment test depending on the internal and external indications available to the entity. When measuring the recoverable amount, the entity calculates two values, the net fair value and the utility value (value in use), compares the two, and chooses the higher of them.<br />
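The comparison step can be written out as a short sketch; the amounts are hypothetical, and the max/compare logic mirrors the IAS 36-style test described above.

```python
def impairment_loss(book_value, net_fair_value, value_in_use):
    """Recoverable amount is the HIGHER of net fair value (fair value less
    costs to sell) and value in use; an impairment loss arises only when the
    book value exceeds that recoverable amount."""
    recoverable = max(net_fair_value, value_in_use)
    return max(0.0, book_value - recoverable)

loss = impairment_loss(1000.0, 850.0, 920.0)      # recoverable = 920 -> loss 80
no_loss = impairment_loss(1000.0, 850.0, 1100.0)  # recoverable > book -> loss 0
```

Taking the higher of the two values means an asset is not impaired as long as either selling it or continuing to use it would recover its book value.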



The net fair value, or net selling price, is in fact the market value reduced by the costs of sale. For productive tangible assets such as machinery and equipment, which are more difficult to trade on active markets, analogies must be drawn with the entity’s recent transactions in similar assets, making adjustments that take into account variables such as production capacity, operating condition, age, etc.<br />

If the asset can be traded on an active market, the fair value is determined by reference to the market value of the tangible asset, the best evidence being considered the price in a firm sale commitment arising from a market transaction. Disposal costs are deducted from the market price; they include legal costs, notary fees related to the sale, stamp duties and similar charges, postal charges, costs incurred to put the asset in good condition for sale, travel costs, etc.<br />

For assets traded on the market for which there is no firm sale commitment, the starting point for measuring fair value is the asset’s market value, obtained by analyzing the recent selling prices (or sale offers) of fixed assets identical or similar to the one analyzed, in order to reach an indication of its value. The method is known in valuation as the sales comparison approach. Because it is usually difficult to find comparable assets identical to the one involved, corrections must be applied to the prices of similar fixed assets sold, to ensure their comparability, on account of the differences in their essential characteristics, called elements of comparison.<br />

The elements on which the comparisons and adjustments are made are taken from the market and they mirror<br />

what the buyers believe to be the causes for the price differences they are willing to pay.<br />

If comparable assets are superior to the one analyzed in a certain characteristic, their price is corrected downwards. Conversely, if the comparable assets are inferior to the analyzed one, a positive correction is made. Ideally, such evaluations should be based on sales of identical or at least similar assets (usually similar) which have been traded on the market.<br />
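The direction of the corrections can be illustrated with a small sketch; the comparable price and the percentage adjustments below are hypothetical.

```python
def adjusted_comparable_price(sale_price, adjustments):
    """Sales-comparison sketch: apply signed percentage corrections to a
    comparable asset's observed price. A comparable that is SUPERIOR in some
    characteristic gets a negative (downward) correction; an INFERIOR one
    gets a positive correction."""
    for pct in adjustments.values():
        sale_price *= (1.0 + pct)
    return sale_price

# Hypothetical comparable machine sold at 50,000: newer than the subject
# asset (superior, -10%) but with a smaller capacity (inferior, +5%).
indication = adjusted_comparable_price(50_000.0, {"age": -0.10, "capacity": 0.05})
```

Several such adjusted indications, drawn from different comparable sales, would then be reconciled into a single value opinion.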

Unfortunately, sales of assets identical to the one analyzed are very rare. In practice, the market analysis will reveal sales of similar, not identical, assets [Manea M.], and this similarity analysis is what the authorized specialist bases his opinion of value on. As comparison elements, we can mention:<br />

� the origin and the actual age (the shaping of the actual age of the comparable will be attempted) ;<br />

� the state (condition): It is known that the differences from the asset’s state (condition) affects the selling price of<br />

the similar assets;<br />

� the capacity: Ideally, the comparable assey should have the same production capacity (or a similar one) with<br />

that of the analyzed asset. On a contrary case, there might be imposed the correction of the comparable asset’s<br />

selling price to reflect the capacity differences.<br />

• the characteristics (accessories): the specialist evaluator should compare the analyzed asset with assets that<br />
show the same characteristics and accessories;<br />
• the location: the geographical location of a comparable asset’s sale can affect the selling price. Additionally,<br />
even an asset’s physical location within an installation can influence the selling price;<br />
• the producer: where possible, the specialist assessor should compare the targeted asset to<br />
sales of similar assets made by the same producer;<br />

• the parties’ motivation: this is an important comparison issue, especially for large machinery.<br />
The specialist assessors should analyze and understand both the buyer’s and the seller’s motivation and in what<br />
way the analyzed asset’s value influences this motivation;<br />
• the price: in many cases, especially for large properties, the transaction’s price should be investigated and<br />
expressed in monetary terms (cash);<br />
• the quality: the comparable asset’s quality should be equivalent to that of the analyzed asset.<br />
Otherwise, the specialist should either give up the comparable asset or make an adequate correction;<br />

• the quantity: unit prices can vary considerably depending on the quantity sold, which must also be correlated<br />
with the market conditions: a buyer’s market suggests that an ample quantity is available, while a seller’s<br />
market suggests a limited quantity;<br />

• the selling date: the specialist assessor should obtain information about the sales recorded within a<br />
reasonable period of time from the date of the analysis. This is especially important in the case of unstable markets.<br />
Theoretically, comparable sales should be close to the date of the analysis, but they are not always easy to obtain.<br />
When such sales fall outside the “reasonable” period of time but must nevertheless be taken into account, the<br />
specialist should justify this and make adequate corrections, because such information is of limited relevance;<br />

• the sale type: the type of sale and its terms indicate different price levels corresponding to the various ways of<br />
commercializing (and therefore to the value premises).<br />

Generally, the sales comparison method can be used for any of the value premises and any form of<br />
commercialization. If prices announced before the transaction began were used, the result could be the<br />
market value. If auctions were considered as a basis for comparison, the result could be the forced liquidation value.<br />

The sales comparison approach, or the modeling technique, is not applicable when the asset involved is unique. Even<br />
when the asset is not unique, the approach is not applicable unless there is an active market for<br />
that element. An inactive market, or one with a limited number of sales comparable with the asset<br />
involved, often shows a lack of demand and the existence of economic depreciation; where there is an inactive<br />
market, the analyzed element may more adequately be evaluated through the analogy method, in accordance with<br />
paragraph 27 of IAS 36 “Impairment of Assets”. Consequently, it is possible for the fair<br />
value to be determined even if the asset is not traded on an active market. In these cases, the entity will analyze the<br />
available information regarding possible past transactions with similar assets for which the market<br />
selling prices are known; likewise, if there are offers for similar assets and the prices are situated at approximately<br />
equal values, the fair value less costs to sell can be estimated. Thus, especially for<br />
productive tangible assets such as machinery and equipment that are more difficult to trade on active markets, it is<br />
necessary to draw analogies with the enterprise’s recent trades of similar assets, applying<br />
reasoning that takes into account variables such as production capacity, operating state, age, etc.<br />

Returning to the sales comparison approach, pertinent questions arise: how is market information obtained, and<br />
where is it obtained from?<br />

How market information is obtained: one way of getting market information is to contact used<br />
equipment dealers who are familiar with the type of equipment being analyzed, in order to find out recent selling<br />
prices or current asking prices. It is preferable to contact more than one dealer, in order to ensure<br />
a certain coherence of the information.<br />

Where can one get market information:<br />

• used equipment dealers;<br />
• new equipment dealers;<br />
• sales brochures;<br />
• newspaper advertisements;<br />
• private sales;<br />
• auctions.<br />

New and used equipment dealers are good information sources. If a customer bought a mechanical device from a<br />
used equipment dealer, this is a good piece of information; moreover, that dealer may be very cooperative in<br />
providing information. The specialist assessor should ask the customer’s permission to contact the dealer who sold<br />
him the equipment, because the customer could be against communicating information about the sale,<br />
especially if the dealer has connections with his competitors. The information obtained from the market<br />
can be used by the specialist in shaping the analyzed asset’s value; in its simplest form, the sales comparison method<br />
reduces to the relation:<br />

The comparable fixed asset’s price +/- corrections = the analyzed fixed asset’s price<br />
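In code, this relation reduces to a signed-adjustment sum; the comparison elements, amounts, and function name below are illustrative assumptions, not figures from the paper:

```python
# A minimal, hypothetical sketch of the sales comparison relation:
# comparable price +/- corrections = indication of the analyzed asset's price.

def adjusted_price(comparable_price: float, corrections: dict) -> float:
    """Apply signed corrections (one per comparison element) to a comparable's price."""
    return comparable_price + sum(corrections.values())

# The comparable sold for 50,000; it is newer and better located than the
# analyzed asset (superior, so negative corrections), but has a smaller
# capacity (inferior, so a positive correction).
corrections = {"age": -4000.0, "capacity": 2500.0, "location": -500.0}
print(adjusted_price(50000.0, corrections))  # 48000.0
```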

Several techniques and methods can be employed to determine the price:<br />

• the identification method;<br />
• the assimilation method;<br />
• the cost percentage method.<br />

The identification method. This technique establishes an asset’s value through comparison to an identical<br />
replacement that has a known selling price. A possible example is the cost of a forklift of<br />
known manufacturer, model, age, capacity and condition. In this case, specialized guides are usually used.<br />



The assimilation method. This technique establishes the value relying on the analysis of some assets that have<br />

key parameters close in size, but not identical (similar assets), using a utility measure (size, capacity) as a<br />

comparison basis.<br />

The cost percentage method. This technique involves establishing the ratio between the selling price and the<br />
current gross cost of an asset at the sale date. With sufficient information, a specialist can carry out statistical<br />
analyses and establish the relations that appear on the market between age, selling price (or asking price) and the<br />
price of a new asset.<br />
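Under assumed figures, the ratio technique can be sketched as follows (function names and numbers are hypothetical, for illustration only):

```python
# Hedged sketch of the cost percentage method: observe, on the market, the
# ratio of a comparable's selling price to its current gross (new) cost, then
# apply that ratio to the analyzed asset's own gross new cost.

def cost_percentage(sale_price: float, gross_cost_new: float) -> float:
    """Ratio between the selling price and the current gross cost at the sale date."""
    return sale_price / gross_cost_new

def value_by_cost_percentage(subject_gross_cost_new: float, ratio: float) -> float:
    """Apply a market-derived cost percentage to the analyzed asset."""
    return subject_gross_cost_new * ratio

ratio = cost_percentage(30000.0, 100000.0)       # 0.30, observed on the market
print(value_by_cost_percentage(80000.0, ratio))  # 24000.0
```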

The revaluation of tangible fixed assets translates into the determination of fair value by assessment experts.<br />
The Romanian rule does not elaborate on the methods for obtaining fair value. It only points out that if fair value<br />
cannot be determined for lack of an active market, the asset should be recognized at its cost or, where<br />
appropriate, at the last revalued amount (less cumulative value adjustments). As the type of value to be expected<br />
when revaluing tangible assets, we believe that value in use (income approach) or net replacement<br />
cost (cost approach) should be determined. Only when management intends, and can argue for, the sale of the<br />
property can we obtain a market value (market comparison approach).<br />

The residual value, as a possible sale price of the asset at the end of its useful life in the entity, is thus a<br />
market value. Specialist assessors will quantify the presumed costs of the selling operation, which will be deducted<br />
from the market value. As we cannot currently know from the market the prices of similar future transactions, in<br />
shaping this market value we will not use the approach based on market comparisons, but either<br />
the income approach or the cost-based approach.<br />

While the income-based approach can be applied in a credible manner – albeit customizing the value to the current<br />
operating conditions of the asset and to its current user, without taking into account a possible acquisition by a<br />
future user or an alternative use – the cost-based approach requires the modeling of variables that are difficult to<br />
quantify. It is a matter of reconsidering the direct and indirect cost components of the fixed asset at the time of its<br />
sale; this modeling requires information that is difficult to quantify, which leads us to the idea that no approach is perfect.<br />

As in the case of intangible fixed assets, the exchange of tangible assets is treated in Romanian accounting as a<br />
sale operation, disregarding the requirement of applying fair value if the transaction has a commercial nature,<br />
with the option of modeling the fair value of the asset given up rather than of the asset received.<br />

In terms of leasing contracts, between the two types of contract – financial and operational – only the former<br />
involves novelties regarding the application of fair value.<br />

While operational leasing generates costs for the lessee and, correspondingly, income for the lessor – both<br />
recognized in the profit and loss account, with the leased asset remaining on the lessor’s books and subject to all<br />
the valuations specific to tangible fixed assets – financial leasing involves several types of estimation included in the<br />
scope of fair value, as follows:<br />

• the fair value of the asset that is the subject of the leasing contract, requiring, in our<br />
opinion, the estimation of a market value, because the transfer of ownership is only partially achieved;<br />

• the modeling, by both the lessee and the lessor, of a residual value of the asset for the end of the lease, which is<br />
also a market value for that date, adjusted for the accumulated depreciation;<br />

• the gross investment in the lease, consisting, for both the lessee and the lessor, in the estimation of the minimum<br />
lease payments to be received;<br />

• the present value of the minimum lease payments, obtained by discounting the total of the periodic lease<br />
payments and the residual value at the end of the lease. To obtain the present value of the minimum lease<br />
payments, the interest rate implicit in the contract is used.<br />
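The discounting described in the last bullet can be sketched as below; this is a simplified model assuming end-of-period payments, with an assumed implicit rate and amounts:

```python
def pv_minimum_lease_payments(payments, residual_value, implicit_rate):
    """Present value of minimum lease payments: each periodic payment and the
    end-of-lease residual value are discounted at the rate implicit in the
    contract. Payments are assumed to fall at the end of periods 1..n."""
    n = len(payments)
    pv = sum(p / (1 + implicit_rate) ** t for t, p in enumerate(payments, start=1))
    pv += residual_value / (1 + implicit_rate) ** n
    return pv

# Three annual payments of 1,000, a residual value of 500, implicit rate 10%.
print(round(pv_minimum_lease_payments([1000.0, 1000.0, 1000.0], 500.0, 0.10), 2))  # 2862.51
```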

3.4 Applications of fair value for assets held for sale<br />

Until the date of their classification as held for sale, fixed assets are valued in accordance with the specific<br />
standards applicable at the date of their entry into the patrimony or during use. From the time of their classification<br />
as held for sale, the measurement criterion is given by IFRS 5 „Non-current Assets Held for Sale and Discontinued<br />
Operations”: the lower of the net book value and the fair value less costs to sell [IASB].<br />
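The measurement rule amounts to a simple lower-of comparison; a sketch with assumed amounts (not taken from the paper):

```python
def held_for_sale_measurement(carrying_amount, fair_value, costs_to_sell):
    """IFRS 5 measurement: the lower of the carrying (net book) amount and
    fair value less costs to sell. Amounts used below are illustrative."""
    return min(carrying_amount, fair_value - costs_to_sell)

# Carrying amount 12,000; fair value 11,500; estimated costs to sell 300.
print(held_for_sale_measurement(12000.0, 11500.0, 300.0))  # 11200.0
```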

If assets are classified as held for sale from their entry into the patrimony, the evaluation involves a comparison<br />
between the book value they would have received had they not been so classified (i.e. acquisition cost) and the fair<br />
value less costs to sell. For assets or groups of assets acquired in a business combination, the assessment involves<br />
modeling only the net fair value, because the book value (their cost) is not relevant at the acquisition date.<br />

In determining the net fair value, among the costs of sale we identify: transportation, handling and<br />
advertising expenses, sales commissions paid to intermediaries, etc. If materializing the sale takes longer than one<br />
year, the costs of sale should be discounted. Also, after discounting the sales costs we may add other items<br />
of expenditure that could increase that value. The increase in the sales costs will not be recorded as a reduction<br />
in asset value through operating expenses, but as a financial expense.<br />

3.5 Applications of the fair value for income<br />

For income, fair value corresponds to either:<br />

• the fair value of the assets/services covered by the transaction, when receipt of the transaction’s counterpart is<br />
not deferred;<br />
• the fair amount of money – cash or cash equivalents – to be received, when receipt of the transaction’s<br />
counterpart is deferred.<br />

When receipt of the transaction’s counterpart is not deferred, the fair value is determined by agreement<br />
between seller and buyer; for the former, the evaluation basis that determined the sales price is the net realizable<br />
value, while for the latter, the transaction’s evaluation basis is the historical cost. The two bases correspond to the<br />
situation in which the asset is sold.<br />

If receipt of the transaction’s counterpart is deferred, the fair value is established as the discounted value of the<br />
amounts to be received. The present value of the revenue is obtained by applying a discount rate to the initial value,<br />
which penalizes the baseline to reflect the delay in its receipt. For the discount rate, the international standards<br />
mention a neutral rate – an interest rate – because deferring the collection of a commercial transaction’s counterpart<br />
is considered a financing transaction.<br />
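The penalization of the nominal amount for deferred receipt reduces to ordinary discounting; a sketch with an assumed neutral rate and amounts:

```python
def present_value_of_deferred_receipt(nominal_amount, interest_rate, years):
    """Discount a deferred receipt at a neutral (interest) rate, since deferring
    the collection is treated as a financing transaction."""
    return nominal_amount / (1 + interest_rate) ** years

# 10,000 receivable in two years, discounted at a 5% neutral rate.
print(round(present_value_of_deferred_receipt(10000.0, 0.05, 2), 2))  # 9070.29
```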

4. Testing the fair value concept application in Romanian accounting practice<br />

To test how the fair value concept is implemented in Romanian accounting practice, we started from the<br />
consideration that shaping it has a great effect on the assets held by the entity, and our research methodology has<br />
thus focused on the application of the concept in this direction. We carried out applied research through a<br />
survey of companies in the South-Muntenia region, materialized in shaping and interpreting the results of a<br />
questionnaire distributed to managers and professional accountants. It sought to identify the entities that use the<br />
concept of fair value in accounting practice, taking into account the applicable law and the tax limitations that do not<br />
motivate professional accountants to model two sets of financial statements – one according to the tax provisions<br />
and another taking into account the economic reality, which would result in a true accounting image.<br />

The distributed questionnaire comprises a set of 16 questions and is listed below:<br />

QUESTIONNAIRE<br />

This questionnaire is aimed at highlighting the practical aspects of applying the concept of fair value to the<br />
recognition of assets and the preparation of financial statements. The data you provide are strictly confidential and<br />
used solely for scientific research.<br />

I. Information about the company<br />

Company name:<br />

Legal form of company: SA, Ltd., SNC, SCS, SCA<br />

Company field of activity:<br />



Type of company: JV, SME, MI<br />

II. Financial information<br />

Turnover: &lt; 100.000 EUR<br />

101.000 EUR – 500.000 EUR<br />

500.001 EUR – 3.000.000 EUR<br />

3.000.001 EUR – 5.000.000 EUR<br />

5.000.000 EUR – 7.300.000 EUR<br />

&gt; 7.300.000 EUR<br />

Number of employees: 1 – 9 employees<br />

10 – 30 employees<br />

30 – 50 employees<br />

Over 50 employees<br />

Total assets:<br />

Net tangible assets:<br />

III. Information on applying the concept of fair value in Romanian accounting practice<br />

1. Is the fair value concept known by the accounting professionals in your company?<br />

YES<br />

NO<br />

2. Does your company use the contribution of professional accountants in shaping the fair value (or is there a<br />
specialized department for evaluation)?<br />

YES<br />

NO<br />

If the answer is YES, go to question number 3!<br />

3. Who is the decision maker in your company regarding the application of the fair value aggregate in the<br />

evaluation?<br />

4. What goals do you have in mind when determining the fair value of various assets held by the company?<br />

5. Do the accounting professionals from your company have enough working tools in modeling the fair value?<br />

YES<br />

NO<br />

6. Do the accounting professionals in your company know the three modeling approaches of fair value: market<br />

value, utility value, replacement value?<br />

YES<br />

NO<br />

7. In determining the fair value of assets, does market information have priority?<br />

YES<br />

NO<br />

8. When you cannot find market prices for identical assets, do you collect market prices for similar assets, adjusting<br />
them for the existing differences?<br />

YES<br />

NO<br />

9. In the absence of any market information, do your professional accountants use, in measuring the fair value,<br />
assessment techniques based on results and costs?<br />

YES<br />

NO<br />

10. In the last three years, was there any fair value modeling for assets held by your company?<br />

YES<br />

NO<br />

11. Please indicate the fair value quantification method by checking the appropriate boxes:<br />

Market value<br />

Income capitalization method<br />

Discounted cash flow method<br />



Depreciated replacement cost method<br />

Other, namely _________________________________________<br />

12. In determining the market value through the modeling technique, the comparable asset price was obtained by<br />

using one of the following methods:<br />

Identification method<br />

Assimilation method<br />

Percentage of cost method<br />

Other, namely _________________________________________<br />

13. Have the tangible fixed assets been revalued in the last 3 years?<br />

YES<br />

NO<br />

14. Please specify, by checking the appropriate boxes, at whose request the revaluation of the tangible fixed<br />
assets was made:<br />

Bank<br />

Shareholders<br />

State<br />

Others, namely _________________________________________<br />

15. Were the revaluation results recorded in the accounts?<br />

YES<br />

NO<br />

16. Specify the impact of the revaluation process on the company´s financial statements.<br />

5. The results’ interpretation<br />

The following conclusions were drawn:<br />

• the Romanian entities, through their practicing professional accountants, are familiar with the international<br />
standards’ regulations regarding fair value;<br />
• of all the entities that responded to our questionnaire, 90% have applied the fair value concept in the<br />
evaluation of various types of assets;<br />
• although the methods of modeling fair value are not concretely detailed in the Romanian accounting<br />
standards, the Romanian entities can turn to specialized assessors, who have their own working tools for<br />
its quantification;<br />
• in the survey undertaken, approximately 85% of the questioned entities used external assessors in the fair<br />
value evaluation process;<br />
• considering the high cost of sustaining a specialized evaluation department, the Romanian entities do<br />
not use this solution in over 80% of the cases, preferring the help of external assessors;<br />
• although the techniques for modeling fair value are well known, at least theoretically, even by professional<br />
accountants, a period of adjustment is needed for the decision makers in the Romanian entities to<br />
realize the importance and necessity of the specialized accountants in modeling this aggregate;<br />
• although fair values have been established for the owned assets, these have not all been used in the<br />
financial reports, as a consequence of the restrictive fiscal practices that limit the concept’s applicability and<br />
recognition.<br />

6. Instead of conclusions…<br />

Lately, fair value has become an important concept defining the current accounting rules, conferring new<br />
qualities in reflecting the economic reality of the entity and its assets, and contributing to the increased<br />
role of the professional accountant, who becomes a veritable consultant for the entity. The Romanian accounting<br />
regulations have recorded significant progress by taking over, sometimes to the point of identity, the definitions and<br />
conditions of the international accounting standards, yet without achieving a conciliation between accounting<br />
rules and accounting policies, between the freedom to choose accounting procedures and the obligation to<br />
provide users with relevant and reasonable information. Although the application of the fair value concept in<br />
Romanian accounting practice is still a novelty – it remains the attribute of expert assessors, who have<br />
sufficient modeling tools and techniques – we believe that in the near future the measurement of fair value will be<br />
attributed to the professional accountant, who is much better grounded in the entity’s decision process.<br />

7. Acknowledgement<br />

This work was co-financed from the European Social Fund through the Sectoral Operational Programme Human<br />
Resources Development 2007-2013, project number POSDRU/1.5/S/59184, “Performance and excellence in<br />
postdoctoral research in the Romanian economics science domain.”<br />

8. References:<br />

1. IASB, International Financial Reporting Standards (IFRS), official rules issued on January 1st, 2009, translation,<br />
CECCAR Publishing, Bucharest, 2009;<br />

2. Deaconu A., The fair value. Accounting concept, Economical Publishing, Bucharest, 2009, chapter 5, pp. 200-<br />

330;<br />

3. Feleagă N., Feleagă L., Evaluation models and regulations in international accounting, Journal of Theoretical and<br />

Applied Economics;<br />

4. Manea M., Measurement and evaluation of depreciation to the assets, Academic Book Publishing, Bucharest,<br />

2007, pp.30-34;<br />

5. Ristea M., Dumitru C., Irimescu A., Manea L., Nichita M., Sahlian D., Policies and accounting treatments of<br />

assets, Economic Tribune Publishing, Bucharest 2007, pp. 142-144;<br />

6. Minister of Public Finance’s Order no. 3.055/2009 for the approval of accounting regulations in conformity with<br />
the European Directives, published in the Official Gazette no. 766 bis/10.11.2009.<br />



THE IMPACT OF THE INTELLECTUAL CAPITAL ON THE COST OF CAPITAL: BRICS 1 CASE 2 .<br />

Elvina R. Bayburina & Alexandra Brainis, National Research University – Higher School of Economics, Russia<br />

Email: elvina.bayburina@gmail.com, abrainis@yahoo.co.uk, http://en.cfcenter.ru/<br />

Abstract. Intellectual capital is an important value driver to form the sustainable corporate performance and reduce stakeholders’<br />

risks. Both companies and capital markets realize that there is a wide range of decisions to be made to improve the control and<br />

reporting systems currently being used. The weighted average cost of capital also plays a great role in assessing and regulating corporate<br />
risk. Although the impact of Intellectual capital (IC) on the cost of capital is significant, and this fact has been investigated and accepted,<br />
there is still little research on the subject. The research conducted in this paper is therefore quite distinctive and<br />
essential, as it offers evidence to the internal managers and owners of companies of why the growth of IC expenditures can reduce the<br />
cost of capital of the company. The considered time period is 5 years and the sample includes companies from BRICS<br />
(Brazil, Russia, India, China, South Africa). It is statistically shown that some of the components and subcomponents of<br />
Intellectual capital have a significant influence on the corporate cost of capital, as do the special conditions of the crisis period<br />
and the special features of some industries. These findings should definitely help the “insiders” and “outsiders” of the company to create<br />
decision-making schemes with less risky and random strategic solutions.<br />

Key Words: Intellectual capital, financial architecture, innovations, sustainable competitive advantage, emerging markets, BRICS<br />

JEL: G32, G34, G35, L21, L26, M14, M51, M52, O31, O32, O34<br />

1. Introduction<br />

The modern economy has been utterly transformed in recent years. The notions and means of production have had to<br />
be totally revised. To survive and to succeed, each company now possesses Intellectual capital, which must be well<br />
managed and exploited (Stewart, 1997). Intellectual capital is an important value driver in<br />

today’s companies. Traditional financial statements do not provide the relevant information for managers or<br />

investors to understand how their resources – many of which are intangible – create value in the future. Intellectual<br />

capital statements are designed to bridge this gap by providing information about how intellectual resources create<br />

future value. Intellectual capital statements can be used as tools to communicate the knowledge-based strategy<br />
externally, and they can obviously be significantly helpful in internal management decisions. Both firms<br />

and capital markets realize that something has to be done to improve the control and reporting systems currently<br />

being used. The vast majority of the scholars investigating the Intellectual capital argue that the financial reporting<br />

system is incapable of explaining “new” resources such as relationships, internally generated assets, and knowledge.<br />

Disclosing information on such factors is likely to lower the cost of capital because it decreases the uncertainty<br />

about the future prospects of a company and provides a more precise valuation of the company (Botosan, 1997).<br />
Several reasons why the disclosure of Intellectual capital information is greatly important for the<br />
capital market can be found below:<br />

1) decreased asymmetry of information, i.e. the problem of exploiting insider information available only to the<br />
internal managers, whereas the outside shareholders have no idea of its existence (Lev, 2001);<br />

2) decreased cost of capital (Lev, 2001);<br />

3) increased engagement; the minor shareholders may be encouraged because they get access to the<br />
information on intangibles which was previously presented only in meetings with larger investors (Holland, 2001);<br />

4) decreased volatility and danger of incorrect valuations of firms, which lead investors and banks to assign<br />
a higher risk level to organizations (Mouritsen et al., 2004).<br />

As mentioned above, Intellectual capital (IC) leads to a decrease in the corporate cost of capital. This can be<br />
explained by the positive effect of an open strategy on the better assessment of future wealth-creation capabilities<br />
(Burgman &amp; Roos, 2007). Furthermore, “new sources of wealth, not readily identified in the value chain” such as<br />

1 Brazil, Russia, India, China, South Africa. On 24th of December 2010 South Africa received an official invitation to enter BRIC. The entrance<br />

has previously been approved by the members of BRIC; however, according to many experts, South Africa is not on the level of the other BRIC<br />
countries by some economic measures.<br />

2 The results of the project “Researches of corporate financial decisions of the companies of Russia and other countries with emerging capital<br />

markets under conditions of global transformation of the capital markets and formation of economy of innovative type”, carried out within the<br />

framework of the Programme of Fundamental Studies of the Higher School of Economics in 2011, are presented in this paper.<br />



value networks, value shops, and the matching business risks force companies to find alternative ways to disclose<br />

this information in a credible way (Burgman &amp; Roos, 2007). Taking the above into account, it is obvious<br />
that firms spending more resources on IC enjoy a higher market capitalization and a lower effective cost of capital.<br />

But one question still remains unanswered: how to define the Intellectual capital in the financial statements and<br />

public information about the firm? Therefore, it is essential to make a brief overview of the articles dedicated to the<br />

Intellectual capital investigation.<br />

2. Intellectual capital overview<br />

2.1 Background approach<br />

It is difficult to provide precise definitions for intangible assets and IC. Thus, the definitions found in the literature<br />

are decidedly broad. According to Stewart IC can be defined through its three main components: human, structural,<br />

and customer capital (Stewart, 1997). “Human capital is the capacity of individuals to provide solutions for their<br />

customers. Structural capital transforms know-how into the group’s property. Customer capital allows relations with<br />

customers to be perpetuated”. However, a single definition of IC adopted by one company may not generalize to<br />

other companies because IC is closely tied to the industry and the specific company it serves. Other definitions in<br />

the literature include Moore who defines IC as customer capital, innovation capital and organizational capital<br />

(Moore, 1996). A more comprehensive and generic framework is presented by Brooking (Brooking, 1996). This<br />

framework has the following categories of IC:<br />

1) market assets (consisting of service or product brands, backlog, customer loyalty, etc.);<br />

2) intellectual property assets (patents, know-how, trade secrets, etc.);<br />

3) human-centered assets (education, work-related knowledge, vocational qualifications, etc.);<br />

4) infrastructure assets (management philosophy, corporate culture, networking systems, etc.).<br />

Brooking’s (1996) framework was modified by her followers (Guthrie et al., 2003). The updated version of IC<br />

has three categories (components) and 18 subcomponents. The three IC categories are:<br />

(1) internal capital with six subcomponents (e.g. intellectual property);<br />

(2) external capital with seven subcomponents (e.g. brands); and<br />

(3) human capital with five subcomponents (e.g. training).<br />

In addition, it is crucial to pay attention to the definition of Intellectual capital and its components<br />
given by Mouritsen et al. (2004). Their study states that the intellectual resources comprise the firm’s<br />
knowledge. In a business context this knowledge is used to improve a firm’s innovation capability, processes and<br />
performance. The authors define four types of knowledge resource: employees; customers and partners;<br />
the virtual infrastructure; and technologies.<br />

1) Employees and knowledge resources with inherent attributes such as skills and personal competences,<br />
experience, education, commitment, or willingness to adapt. Groups of employees produce beneficial emergent<br />
qualities.<br />

2) Knowledge resources based on customers and partners. Especially the relationship to customers, users, and other<br />

partners such as suppliers, their satisfaction and loyalty, their referral to the company, insight into users’ and<br />

customers’ needs and the degree of co-operation with customers and users in product and process development.<br />

3) The virtual infrastructure can be a knowledge resource, as it includes procedures and routines. These can be the<br />
company’s innovation processes and quality procedures, management and control processes, and mechanisms for<br />
handling information.<br />

4) Technologies are knowledge-based assets, as they provide technological support for the other three knowledge resources. The focus is usually on the company’s IT systems (software and hardware), such as Internet access, IT intensity, IT competencies and IT usage.<br />

After a detailed examination of the papers defining IC, it was decided to use the definition and structure of the IC components given by Bayburina and Golovko (2008). Intellectual capital is the aggregation of the key qualitative, intangible characteristics which give a company a competitive advantage and allow it to provide a distinctive, valuable offer. This definition gives the broadest understanding of IC because it extends beyond the commonly used term of intangible assets alone. Thus, Intellectual capital is a complex of knowledge, the accrued experience of employees and intellectual property. It is important to mention that not all objects of engineering design, marketing mix tools and customer relationship principles are objects of intellectual property (Bayburina, 2007; Bayburina & Golovko, 2008). To define the IC components, it was decided to use the classification presented in the research papers by Ivashkovskaya and Bayburina (2007), Bayburina (2007) and Bayburina and Golovko (2008, 2009, 2010). These components are as follows (the corresponding subcomponents are given in brackets):<br />

1) Human capital (personnel’s knowledge and skills, their key competences and loyalty);<br />

2) Process capital (company’s infrastructure – IT systems, business processes, operational processes, the plan of<br />

changing the organizational structure, launching and implementation of the basic and auxiliary business processes);<br />

3) Client capital (the whole set of client relationships, characterized mostly by loyalty and also by activities directed at improving those relationships, e.g. environmental protection measures, servicing and settlement of disputes, personal data protection and confidentiality guarantees, etc.);<br />

4) Innovation capital (development and implementation of the new technologies, investments in the fixed assets<br />

etc.);<br />

5) Network capital (the aggregation of the company’s relationships with its key stakeholders, such as banks, investors, creditors, suppliers and the government).<br />

2.2. Measuring cost of capital<br />

The weighted average cost of capital also plays a great role in a company’s life. Globally, there are two ways of applying cost of capital theory: to attract more investors and to value existing and future projects as precisely as possible. Although the impact of the cost of capital is significant and this fact is broadly accepted, there are very few investigations of its relation to Intellectual capital. The weighted average cost of capital is a calculation of a company’s cost of capital in which each source of capital is proportionately weighted. All capital categories (equity and debt), such as common and preferred stock, bonds and different types of long-term debt, are weighted according to their market value. Ceteris paribus, WACC (the corporate cost of capital) is to some extent a measure of corporate risk. The WACC equation multiplies the cost of each capital component by its market-value weight and is calculated as follows:<br />

(1) WACC = (E/V) × Re + (D/V) × Rd × (1 − t), where:<br />

WACC – weighted average cost of capital,<br />

Re – cost of equity,<br />

Rd – cost of long-term debt,<br />

E – market value of the company’s equity,<br />

D – market value of the company’s long-term debt,<br />

V – market value of the company, calculated as V = E + D,<br />

E/V – weight of the market value of equity,<br />

D/V – weight of the market value of long-term debt,<br />

t – corporate tax rate.<br />
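As an illustration of formula (1), the calculation can be sketched as below; the input figures are hypothetical and are not drawn from the paper’s sample.<br />

```python
def wacc(cost_equity, cost_debt, equity_mv, debt_mv, tax_rate):
    """Weighted average cost of capital: each capital source is weighted
    by its share of the firm's total market value, and the cost of debt
    is reduced by the corporate tax shield (formula (1))."""
    total_mv = equity_mv + debt_mv          # V = E + D
    w_e = equity_mv / total_mv              # weight of equity, E/V
    w_d = debt_mv / total_mv                # weight of long-term debt, D/V
    return w_e * cost_equity + w_d * cost_debt * (1 - tax_rate)

# Illustrative figures: 12% cost of equity, 7% cost of debt,
# 60/40 capital structure, 30% corporate tax rate.
print(round(wacc(0.12, 0.07, 600.0, 400.0, 0.30), 4))  # 0.0916
```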

Most research is dedicated precisely to the cost of equity. This can be explained by the practical use of the cost of equity in everyday calculations, whereas the cost of debt remains the same most of the time. While examining the cost of equity, scholars have come to some interesting conclusions. Botosan (1997), for example, reports that firms with a low analyst following benefit from lower costs of equity when their disclosure levels are higher; this result was later extended and confirmed by Botosan and Plumlee (2002). Botosan examines the association between disclosure level and the cost of equity capital by regressing firm-specific estimates of the cost of equity capital on market beta, firm size and a self-constructed measure of disclosure level.<br />

Overall, empirical research generally documents a negative association between public disclosure and the cost of equity (Frankel et al., 1995; Welker, 1995; Healy et al., 1999; Lang and Lundholm, 2000). There are also investigations of the influence of other factors, such as market timing (Song, 1993), the information environment (Willis, 1991), intangible investments (Shangguan, 2005), legal institutions and securities regulations (Hail and Leuz, 2005) and corporate social responsibility (El Ghoul, Guedhami et al., 2010). But none of these scholars has researched the influence of IC on the cost of capital, although similar research has been conducted on IC’s impact on Market Capitalization, Intellectual Enterprise Value (Bayburina & Golovko, 2008, 2009, 2010), etc.<br />

Following Bayburina and Golovko (2008), Intellectual Enterprise Value (IEV) is the difference between the market value of a company’s equity and the book value of its equity. The calculated delta reflects the excess of market capitalization over book value; panel data covering more than 5 years help to eliminate short-term speculative reactions. This delta also reflects how “the market” (stakeholders, mainly investors) assesses the level of the company’s internal efficiency (Bayburina & Golovko, 2008). In this pattern the value is generated by its intellectual part. Bayburina and Golovko therefore conclude that the Intellectual value of the company is a part of its total value, created through the process of accumulating intellectual components (Bayburina & Golovko, 2008); moreover, this value can be traced by the company’s external stakeholders. The models developed in that study showed a significant effect of the IC components on the estimated variables (Bayburina & Golovko, 2009; Abdolmohammadi, 2005; O’Donnell, O’Regan & Coates, 2000; Brennan & Connell, 2000; Petty & Guthrie, 2000, etc.). Considering the possible impact of IC on the cost of capital, the research conducted in this paper is therefore quite essential: it provides evidence to managers and owners of companies as to why IC expenditures can decrease the cost of capital and corporate risk.<br />

3. Hypotheses<br />

The authors designed the following hypotheses.<br />

Hypothesis 1. Human capital components have a negative influence on the company’s cost of capital. Human capital refers to the accumulated value of investments in employee training, competence and future development. The term focuses on the value of what the individual can produce; human capital thus encompasses individual value in an economic sense. Human capital can be further sub-classified into employees’ competence, relationship ability and values. A company’s employees can be regarded as its most valuable asset, which was fully demonstrated in the time of crisis, because of employees’ ability to generate ideas, retain knowledge and anticipate internal changes thanks to their deep understanding of the company’s internal environment. As indicators of human capital it was decided to include the number of employees, personnel expenditures and asset usage, calculated as the book value of total assets divided by the number of employees; this parameter is assumed to characterize how efficiently the company’s employees exploit its assets.<br />

Hypothesis 2. Process capital components have a negative influence on the company’s cost of capital. Operating expenses were taken as the indicator of process capital, because an increase in process expenditures means, to some extent, that the company tends to maintain and improve its operational processes, which is directly reflected in the company’s bankruptcy and instability risks.<br />

Hypothesis 3. Innovation capital components have a direct influence on the company’s cost of capital. The authors assume that a company able to bear substantial capital expenditures may have more opportunities to qualitatively improve its production and to intensify, renew and create assets. That is why capital expenditures were taken as one of the indicators of innovation capital: they reflect the company’s ability to implement innovations, its orientation towards optimizing the structure of current assets, and the replacement of old equipment with new, more productive equipment. Dividend policy should not be forgotten either. On the one hand, an increase in dividends usually leads to a decrease in reinvestment in the business; on the other hand, dividends are considered a signal to the market, and such a signal may be either positive (an improvement in the financials) or negative. Companies with a higher level of Intellectual capital tend to pay lower dividends as a signal that they intend to fulfil their responsibilities towards all stakeholders; in addition, a smaller dividend payout is considered a signal of the intention to invest in the sustainable development of the company. As a last, but not least, indicator of innovation capital, R&D expenditures should be mentioned. Such expenditures should be considered not as current expenses but as investments in future improvements, potential and growth.<br />

Hypothesis 4. A particular time period has a significant influence on the company’s cost of capital. In the time of crisis all the key financial indicators changed dramatically, which crucially affected the indicators of companies’ sustainability and effectiveness. Considering the panel data for 2005-2009, the impact of the time period on network activity and collaboration can hardly be excluded.<br />



Hypothesis 5. A particular industry has a significant influence on the company’s cost of capital. This hypothesis is mostly addressed in the second part of the empirical study, where groups of countries are considered and the BRICS companies are divided into two main groups. A set of dummy variables for each industry was therefore developed.<br />

4. Research model<br />

According to all the limitations and principles of the analysis, the basic research model can be presented as follows, where the cost of capital (WACC) is the main dependent variable:<br />

(2) WACC_it = α + (ρ_1,it, …, ρ_n,it) × IC + (β_1,it, …, β_k,it) × FV + ε_it, where:<br />

WACC_it – the Weighted average cost of capital, calculated according to formula (1);<br />

IC – a vector of Intellectual capital subcomponents;<br />

FV – a vector of fundamental variables;<br />

ε – a vector of random errors (“white noise”);<br />

i – BRICS company index;<br />

t – year index;<br />

n – index of a particular Intellectual capital component;<br />

k – index of a particular fundamental variable.<br />
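As a sketch of how a model of form (2) can be estimated on panel data, the NumPy example below applies the within (fixed-effects) transformation to a simulated one-regressor panel. It is illustrative only: the data are simulated, and it does not reproduce the authors’ Stata/Eviews estimation.<br />

```python
import numpy as np

# Simulated panel: 50 firms observed over 5 years (illustrative only).
rng = np.random.default_rng(0)
n_firms, n_years = 50, 5
firm = np.repeat(np.arange(n_firms), n_years)

# One regressor standing in for an IC subcomponent, a firm-specific
# effect (absorbed by the fixed effects) and white noise (epsilon).
x = rng.normal(size=n_firms * n_years)
alpha = rng.normal(size=n_firms)[firm]
beta_true = -0.5                      # hypothesised negative IC effect
y = alpha + beta_true * x + 0.01 * rng.normal(size=x.size)

def demean_by(group, v):
    """Subtract each firm's mean: the 'within' transformation that
    removes the firm-specific fixed effect."""
    means = np.bincount(group, weights=v) / np.bincount(group)
    return v - means[group]

y_w, x_w = demean_by(firm, y), demean_by(firm, x)
beta_hat = (x_w @ y_w) / (x_w @ x_w)  # one-regressor OLS on demeaned data
print(round(beta_hat, 2))             # recovers roughly -0.5
```

The same within transformation underlies the Fixed Effects models reported later in the paper.<br />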

4.1 Research model: fundamentals<br />

Fundamental variables will also be included in the model as control variables; the combination of such variables in the model will be defined during the research process. The research model includes the following fundamental factors:<br />

• Total revenues from goods sold, less adjustments for returns, discounts, insurance payouts, sales tax and value added tax (Sales)<br />

• Book value of total assets (BVA)<br />

• Return on book value of total assets (ROA)<br />

• Return on book value of equity (ROE)<br />

• Net income (NI)<br />

• Operating income (OI)<br />

• Earnings before interest, taxation, depreciation and amortization (EBITDA), calculated as EBITDA = Operating income + D&A, where Operating income is income from the company’s operating activity and D&A is depreciation and amortization<br />

• Book value of equity (BVE)<br />

• Market value of equity, market capitalization (MC)<br />

• Book leverage (Lev), calculated as Lev = Book value of total debt / BVE<br />

• Intellectual value distress (IVD), calculated as IVD = BVE / MC<br />

• Size (LnBVA), calculated as the natural logarithm of BVA<br />
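The derived control variables above (EBITDA, Lev, IVD, LnBVA) follow directly from their definitions; a minimal sketch is given below, where all figures are placeholders rather than data from the sample.<br />

```python
import math

# Illustrative firm-year record (placeholder figures, not sample data).
firm = {"operating_income": 120.0, "d_and_a": 30.0,
        "book_debt": 200.0, "bve": 400.0, "mc": 900.0, "bva": 1500.0}

ebitda = firm["operating_income"] + firm["d_and_a"]  # EBITDA = OI + D&A
lev = firm["book_debt"] / firm["bve"]                # Lev = book debt / BVE
ivd = firm["bve"] / firm["mc"]                       # IVD = BVE / MC
size = math.log(firm["bva"])                         # LnBVA = ln(BVA)

print(ebitda, lev, round(ivd, 2), round(size, 2))    # 150.0 0.5 0.44 7.31
```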

4.2 Research model: IC components and subcomponents<br />

• Human capital<br />

o Personnel expenses (PE)<br />

o Number of employees (NE)<br />

o BVA/NE<br />

• Process capital<br />

o Operating expenses (OE)<br />

• Innovation capital<br />

o Capital expenditures (Capex)<br />

o Dividends paid (DP)<br />

o Research & Development expenses (lnR&D), calculated as the natural logarithm of Research & Development expenses<br />

4.3 Research model: time, industry and country dummy variables<br />

The influence of a particular time period was included in the model. The main distinctive feature of network capital is that it is accumulated mostly through the company’s interactions with the external business environment, so the stage of the world economy influences the degree of stakeholders’ activity: the character and intensity of business network collaboration, and the type and frequency of M&A, depend on each particular year investigated. The economic upturn in business activity of 2005-2007 is closely connected with the possibility of creating value networks among companies, while 2008-2009 was a period of downturn and contraction of business activity. The features of each industry and of each particular country were also included in the model for investigation. A set of corresponding dummy variables, serving as proxies for the influence of each year, was included in the model:<br />

• D05 – equals “1” if the year is 2005 and “0” otherwise;<br />

• D06 – equals “1” if the year is 2006 and “0” otherwise;<br />

• D07 – equals “1” if the year is 2007 and “0” otherwise;<br />

• D08 – equals “1” if the year is 2008 and “0” otherwise;<br />

• D09 – equals “1” if the year is 2009 and “0” otherwise.<br />

A set of corresponding dummy variables reflecting industry influence was included in the model:<br />

• DBM – equals “1” if the industry is «Basic materials», and “0” otherwise;<br />

• DTel – equals “1” if the industry is «Telecommunications», and “0” otherwise;<br />

• DConsG – equals “1” if the industry is «Consumer goods», and “0” otherwise;<br />

• DConsS – equals “1” if the industry is «Consumer services», and “0” otherwise;<br />

• DH&C – equals “1” if the industry is «Health & Care», and “0” otherwise;<br />

• DO&G – equals “1” if the industry is «Oil & Gas», and “0” otherwise;<br />

• DFin – equals “1” if the industry is «Financial», and “0” otherwise;<br />

• DInd – equals “1” if the industry is «Industrial», and “0” otherwise;<br />

• DTech – equals “1” if the industry is «Technology», and “0” otherwise;<br />

• DUt – equals “1” if the industry is «Utilities», and “0” otherwise.<br />

A set of corresponding dummy variables reflecting country influence was included in the model:<br />

• DBZ – equals “1” if the country is «Brazil», and “0” otherwise;<br />

• DRUS – equals “1” if the country is «Russia», and “0” otherwise;<br />

• DIN – equals “1” if the country is «India», and “0” otherwise;<br />

• DCH – equals “1” if the country is «China», and “0” otherwise;<br />

• DSAR – equals “1” if the country is «South Africa», and “0” otherwise.<br />
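The year, industry and country indicators described above can be generated mechanically; a minimal sketch of the 0/1 dummy coding follows, using the category labels listed in the text.<br />

```python
def dummies(value, categories):
    """One 0/1 indicator per category: 1 if the observation falls in that
    category and 0 otherwise, as used for the year, industry and country
    dummy variables in the model."""
    return {c: int(value == c) for c in categories}

years = [2005, 2006, 2007, 2008, 2009]
print(dummies(2008, years))
# {2005: 0, 2006: 0, 2007: 0, 2008: 1, 2009: 0}

countries = ["Brazil", "Russia", "India", "China", "South Africa"]
print(dummies("Russia", countries)["Russia"])  # 1
```

Exactly one indicator equals 1 for each observation, which is why one dummy (or the constant) must be dropped in estimation to avoid perfect collinearity.<br />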

Table 1 provides hypothetical signs of connection between the dependent variable and independent variables.<br />

Independent variables Hypothetical sign of connection with the dependent variable<br />

Sales -<br />

Book value of total assets -<br />

Net assets -<br />

Net income -<br />

Return on book value of total assets -<br />

Return on book value of equity -<br />

Operating income -<br />

Book value of equity -<br />

Market value of equity -<br />

Book leverage +<br />

EBITDA -<br />

Intellectual value distress -<br />

Size -<br />

Personnel expenses -<br />



Number of employees -<br />

Book value of total assets/Number of employees -<br />

Operating expenses -<br />

Capital expenditures +<br />

Dividends paid +<br />

Research & Development expenses -<br />

Time influence Significant<br />

Industry influence Significant<br />

4.4 Sample and source data.<br />

The research conducted in this study is dedicated only to BRICS companies, because the BRICS countries are widely regarded as the most interesting sample of emerging markets to investigate. The term “BRIC” was coined by Jim O’Neill, an economist at the Goldman Sachs investment bank, in 2001. After the 2011 conference in Davos it was proposed to include South Africa in the BRIC emerging markets group, because it reflects the same issues as Brazil, Russia, India and China. The BRIC countries have attracted considerable media and academic attention in recent years. These countries differ from one another in culture, background, language and the structure of their economies. However, they have a common denominator: economic growth in the BRICS has greatly exceeded that of the world’s leading industrialized nations. Even after the economic crisis that started in 2007, they continued to outperform the rest of the world. While in 2009 large economies such as Japan and Germany shrank by as much as 6%, Brazil stayed steady, India grew 5.9% and China 8.1%; only Russia was the group’s poor performer, shrinking 7%. One more common characteristic is that the financial assets of these five countries are undervalued and have great growth potential. Forecasts of their future performance will further increase interest in these countries and also justify picking exactly this set of countries for the research in this study. While all the BRICS countries are associated with the fastest-developing markets, the potential of each country is estimated to be at a different level. After the crisis, scholars divided this set of countries into two parts: “the source economies” (Brazil, Russia and South Africa) and “the pushovers” (India and China). That is why it was decided to develop three models in this paper: one for all five countries and one for each group separately. Taking into account the necessity of assessing companies from different countries, the final research sample was formed according to the following criteria:<br />

a) The sample is formed from BRICS companies listed from 2005 to 2010.<br />

b) The sample is formed from BRICS companies for which there is enough disclosed data to calculate the cost of capital, WACC, according to formula (1).<br />

c) The selection of this time period is dictated by the length of Intellectual capital accumulation and, therefore, the necessity of analysing panel data covering no less than 5 years (Bayburina & Golovko, 2009).<br />

d) Such research is of special interest owing to the fact that, on the one hand, the BRICS countries are leaders among emerging economies, while at the same time they differ dramatically from each other. It should be mentioned that the information used in the study should be available from open sources (e.g. annual reports or other information on websites), thus making it traceable for a wide range of users. In order to ensure comparability, companies should have financial reports prepared under national GAAP accounting standards. As the results of previous studies show (Bayburina & Golovko, 2009), the number of companies, especially from India and Brazil, with IFRS financial reports before 2005 is not high enough for the research purposes; it was therefore necessary to restrict the use of such reports in the research.<br />

e) For the purpose of comparability, the limitation on capitalization prescribes choosing companies with a market capitalization in 2010 of no less than $200 mln.<br />


Statistical information for the research was collected from the Bloomberg data source and corporate websites. With all the criteria met, the final sample comprises 273 companies from the BRICS countries, representing 10 industries. To reach the main goal of the research, a special research model was introduced and a series of linear regressions was run. The empirical panel data research was carried out using regression analysis software (the program packages Stata 11.0, Eviews 7.0 and Microsoft Excel 2010).<br />



5. Results<br />

5.1 Data analysis<br />

To test the normality of the variables’ distributions, a corresponding test (the Skewness-Kurtosis test for normality) was carried out. Data verification was performed by means of correlation analysis and special tests for multicollinearity. Various types of OLS regression models were tested and VIF tests were carried out for the two subsamples. Models were retained only if their VIF values did not exceed the critical levels (10 for individual VIF values and 6 for the average over a group of factors, according to the Stata criteria), and on this basis the final model of the research was chosen. The mean VIF was 3,13 for the chosen set of variables in the four-country sample. All variables of the “Brazil-Russia-South Africa” subsample are normally distributed; the mean VIF was 1,27 for the chosen set of variables in this subsample. All variables of the “India-China” subsample are normally distributed; the mean VIF was 1,39 for the chosen set of variables in this subsample.<br />
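The multicollinearity screening described above can be reproduced generically. The sketch below computes variance inflation factors with NumPy on simulated data; it is not the authors’ Stata code, and the regressor names are placeholders.<br />

```python
import numpy as np

def vifs(X):
    """Variance inflation factor for each column of X:
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the remaining columns (with an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 0.5 * rng.normal(size=200)   # deliberately correlated with x1
x3 = rng.normal(size=200)              # independent regressor
v = vifs(np.column_stack([x1, x2, x3]))
print([round(x, 1) for x in v])        # x1 and x2 inflated, x3 near 1
```

A screening rule of the kind described in the text would then discard specifications whose individual VIFs exceed 10 or whose mean VIF exceeds 6.<br />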

5.2 Evaluation<br />

In order to evaluate the influence of each independent factor, a series of linear regressions was run for each of the samples: BRIC, Brazil-Russia-South Africa and India-China (the three parts of the empirical investigation). As the final model, the authors chose the model in which all factors are significant (at no less than the 5% significance level).<br />

I. For the chosen BRIC models, specification tests were carried out to select the model specification that reflects the temporal structure of the available data. The authors carried out the Wald test, the Breusch-Pagan test and the Hausman test. The Wald test showed that the pooled model is rejected in favour of the Fixed Effects model; the Breusch-Pagan test showed that the pooled model is rejected in favour of the Random Effects model; and the Hausman test showed that the Random Effects model is rejected in favour of the Fixed Effects model. According to the results of these tests, the Fixed Effects model was chosen for the BRIC sample; the basic criterion was the highest value of R² within = 0,5304. The final Fixed Effects model for the BRIC sample is presented below:<br />

Test Statistics<br />

Wald test F test that all u_i=0; F(263, 859)=3,64 Prob>F=0,0000<br />

Breusch-Pagan test chi2=96,40 Prob>chi2=0,0000<br />

Hausman test chi2=44,68 Prob>chi2=0,0000<br />

Table 2: BRIC model specification<br />
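The specification-test sequence reported in the tables amounts to a simple decision rule over the three p-values; a sketch follows, assuming the conventional 5% significance threshold (the threshold is not stated explicitly in the text).<br />

```python
def choose_panel_model(p_wald, p_breusch_pagan, p_hausman, alpha=0.05):
    """Pooled vs fixed vs random effects, following the sequence in the
    text: the Wald/F test rejects pooled in favour of FE, Breusch-Pagan
    rejects pooled in favour of RE, and Hausman decides between FE and RE."""
    fe_beats_pooled = p_wald < alpha
    re_beats_pooled = p_breusch_pagan < alpha
    if not (fe_beats_pooled or re_beats_pooled):
        return "pooled"
    # Hausman rejection means RE estimates are inconsistent, so keep FE.
    return "fixed effects" if p_hausman < alpha else "random effects"

# BRIC sample (Table 2): all three tests reject their nulls.
print(choose_panel_model(0.0000, 0.0000, 0.0000))  # fixed effects
# Brazil-Russia-South Africa (Table 3): Hausman fails to reject.
print(choose_panel_model(0.0000, 0.0000, 0.2684))  # random effects
```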

The human capital subcomponent book value of total assets divided by the number of employees and the innovation capital subcomponents Capex and R&D are significant. Taking the absolute value of Capex as the indicator yields a negative impact on the company’s cost of capital, which confirms the authors’ expectations. The years 2008 and 2009 can be considered years of economic downturn, especially for the emerging markets, and their influence in the model is significant and positive. The book value of assets is a significant fundamental variable, but its “weight” is quite small. The same can be said of the equity variable, although its influence is positive; this can be explained by the consideration that a company’s owners prefer to raise the cost of equity while enlarging the equity capital, because of increasing risks and capital “erosion”. The long-term data analysis makes it possible to eliminate speculative value fluctuations; accordingly, since the accumulation of Intellectual capital is a time-demanding process, performance should be evaluated over a long-run horizon. The constant in the final BRIC model is positive, which means that, in general, the WACC of the BRIC companies was increasing over the investigated period 2005-2009, probably owing to the unstable economic situation in those years.<br />

II. For the chosen Brazil-Russia-South Africa models, specification tests were carried out to select the model specification that reflects the temporal structure of the available data. The authors carried out the Wald test, the Breusch-Pagan test and the Hausman test. The Wald test showed that the pooled model is rejected in favour of the Fixed Effects model; the Breusch-Pagan test showed that the pooled model is rejected in favour of the Random Effects model; and the Hausman test did not reject the Random Effects model in favour of the Fixed Effects model. Since the sample of companies in the research is closer to a general (universal) population than to a fixed set of data, the Random Effects model is more appropriate than the Fixed Effects model. According to the results of these tests, the Random Effects model was chosen for the Brazil-Russia-South Africa sample.<br />

Test Statistics<br />

Wald test F test that all u_i=0; F(120, 300)=2,06 Prob>F=0,0000<br />

Breusch-Pagan test chi2=16,16 Prob>chi2=0,0000<br />

Hausman test chi2=8,79 Prob>chi2=0,2684<br />

Table 3: Brazil-Russia-South Africa model specification<br />

For the subsample of Brazilian, Russian and South African companies, the final model with Random Effects was chosen; the Wald statistic is acceptable (502,1). The final Random Effects model for the Brazilian, Russian and South African sample is presented below. In this sample the personnel expenses became significant, and the Oil & Gas industry has a large “weight”, which is very logical given the nature of the data. The role of the crisis period remains the same (a positive influence), while among the fundamentals there are some changes: Intellectual value distress and leverage become significant, with comparable coefficients.<br />

III. For the chosen India-China models, specification tests were carried out to select the model specification that reflects the temporal structure of the available data. The authors carried out the Wald test, the Breusch-Pagan test and the Hausman test. The Wald test showed that the pooled model is rejected in favour of the Fixed Effects model; the Breusch-Pagan test showed that the pooled model is rejected in favour of the Random Effects model; and the Hausman test did not reject the Random Effects model in favour of the Fixed Effects model.<br />

Test Statistics<br />

Wald test F test that all u_i=0; F(93, 247)=2,38 Prob>F=0,0000<br />

Breusch-Pagan test chi2=32,53 Prob>chi2=0,0000<br />

Hausman test chi2=7,98 Prob>chi2=0,2397<br />

Table 4: India-China model specification<br />

The final Random Effects model was chosen for the second subsample (India-China); the final criterion was the Wald statistic (361,54). The final Random Effects model for the Indian and Chinese sample is presented below:<br />

6. Conclusion.<br />

The main goal of this research was to evaluate, by means of panel data analysis, the influence of particular components of Intellectual capital on the company’s cost of capital. Because Intellectual capital is accumulated over a long period of time, it was decided to consider a 5-year time period and to use panel data analysis. The research led the authors to the conclusion that the components of Intellectual capital are rather significant for a company’s sustainable existence. Human and innovation capital can be considered the key factors reflected in the company’s cost of capital, both for the BRICS sample and for the “group” samples. Regarding the BRICS countries, it is crucial to emphasize that the influence of human capital (personnel expenses per employee) and of some components of innovation capital (R&D expenses) is larger than the impact of such fundamentals as total assets and equity. There is no doubt that a company’s employees and their knowledge and skills are the basis of its effectiveness. This was demonstrated by the recent crisis: the benchmark companies in all industries did their best to support their staff and to create benefit programs and motivation schemes to keep employees loyal. Also, during the crisis most of the key financials lost the market’s trust and value, which caused investors to search abruptly for other indicators that would make decision-making less risky and random. All of the above has led to Intellectual capital becoming more and more popular among both the outsiders and the insiders of the company. The significant influence of innovation capital can be explained as follows. The significance of capital expenditures means that a company able to bear substantial capital expenditures may have more opportunities to qualitatively improve its production; it is a kind of signal that the company is able to implement innovations and tends to optimize its current asset structure by replacing old equipment with new. Research and development expenses are synonymous with a long-term basis for the company’s future development and, therefore, a future decrease in the cost of capital.<br />

There is no surprise that some of the years turned to have a significant influence on the company’s cost of<br />

capital. This research was conducted using the data from 2005 to 2009 what includes both the period of stabilization<br />

and the crisis. It was shown by the regression that the years 2008 and 2009 negatively impact on the cost of capital<br />

and it is not surprising due to the fact that the economy of all the countries till 2007 was overheated and the crisis<br />

destroyed all the development programs and forecasts. The found influence of the particular industries is rather<br />

logical because of the specific division of the countries into two groups (and companies correspondingly) – the<br />

source economies and the pushovers. The first group felt sensitive to the Oil & Gas industry and the second – the<br />

industry of Basic Materials and Consumer goods. To sum up, from this research it became clear to the authors that<br />

the Intellectual capital components play a great role in the company’s sustainable development and effective<br />

existence. By increasing expenses on the human resources and innovation capital of the company probably enjoy<br />

much higher financial indicators and will be able to focus on the long-term macroeconomic strategies.<br />

7. References<br />

Abdolmohammadi, M. (2005) Intellectual capital disclosure and market capitalization, Journal of Intellectual<br />

capital, Vol. 6(3), 397-416.<br />

Bayburina, E. (2007) Intellectual Capital Investigation Techniques as the Key Trigger of the Sustainable Long-term<br />

Company Development, Journal Corporate Finance (e-journal), Vol. 3, 85-101 (In Russian).<br />

Bayburina, E. & Ivashkovskaya, I. (2007) Role of the Intellectual Capital in the Value creation Process of the Large<br />

Russian Companies, The bulletin of Finance Academy, Vol. 4(44), 53-62 (In Russian).<br />

Bayburina, E. & Golovko T. (2008) The Empirical Research of the Intellectual Value of the Large Russian<br />

Companies and the Factors of its Growth, Journal of Corporate Finance (e-journal), Vol. 2(6), 5-19 (In<br />

Russian).<br />

Bayburina, E. & Golovko, T. (2009) Design of Sustainable Development: Intellectual Value of Large BRIC<br />
Companies and Factors of their Growth, Journal of Knowledge Management (e-journal), Vol. 7(5), 535-558.<br />

Bayburina, E. & Golovko, T. (2010) Synergetic effects of Intellectual capital: “hidden” reserves for innovation<br />
development of BRIC large companies, Proceedings of the 7th International Conference on Applied Financial<br />
Economics, 169-178.<br />

Botosan, C. (1997), Disclosure level and the cost of equity capital, Accounting Review, Vol. 72(3), 323-49.<br />

Botosan, C. & Plumlee, M. (2002) A re-examination of disclosure level and the expected cost of equity capital,<br />

Journal of Accounting Research, Vol. 40(1), 21-40.<br />

Brennan, N. & Connell, B. (2000) Intellectual capital: current issues and policy implications, Journal of Intellectual<br />

Capital, Vol. 1(3), 206-240.<br />

Brooking, A. (1996) Intellectual Capital, International Thompson Business Press, Boston, MA.<br />

Burgman, R. & Roos, G. (2007) The importance of intellectual capital reporting: evidence and implications, Journal<br />

of Intellectual Capital, Vol. 8(1), 7-51.<br />

El Ghoul, S., Guedhami, O., et al. (2010) Does corporate social responsibility affect the cost of capital?, Journal of<br />
Banking and Finance, 1-19.<br />

Frankel, R., McNichols, M., Wilson, G. (1995) Discretionary disclosure and external financing, Accounting Review<br />

(January), 135-150.<br />



Guthrie, J., Petty, R., Yongvanich, K., Ricceri, F. (2003), Intellectual capital reporting: content approaches to data<br />

collection, paper presented at Performance Measurement Association Intellectual Capital Symposium,<br />

Cranfield, October 1-2.<br />

Hail, L., Leuz, C. (2006) International differences in the cost of capital: Do legal institutions and securities<br />

regulation matter? Journal of Accounting Research, Vol. 44(3), 485-531.<br />

Healy, P., Palepu K. (1993) The effect of firms' financial disclosure strategies on stock prices. Accounting Horizons<br />

(March), 1-11.<br />

Lang, M., Lundholm, R. (1993) Cross-sectional determinants of analyst ratings of corporate disclosures, Journal of<br />

Accounting Research 31, 246-271.<br />

Lev, B. (2001), Intangibles, Management, Measurement and Reporting, Brookings Institution, Washington, DC.<br />

Moore, N.G. (1996), Measuring corporate IQ, Chief Executive, November, 36-9.<br />

O’Donnell, D., O’Regan, P., Coates, B. (2000) Intellectual capital: a Habermasion introduction, Journal of<br />

Intellectual capital, Vol. 1(2), 187-200.<br />

Petty, R., Guthrie, J. (2000) Intellectual capital literature review: measurement, reporting and management, Journal<br />

of Intellectual Capital, Vol. 1(2), 155-176.<br />

Shangguan, Z. (2005) Intangible investments and the cost of equity capital, Dissertation, University of Connecticut.<br />

Song, K. (1993) Market timing and the cost of capital of the firm, Dissertation, Louisiana State University.<br />

Stewart, T.A. (1997), Intellectual Capital: The New Wealth of Organizations, Currency-Doubleday, New York, NY.<br />

Willis, V. (1991) Analyst following, cost of equity capital and the information environment: evidence from the<br />
electric utility industry, Dissertation, University of Colorado at Boulder.<br />

Welker, M. (1995) Disclosure policy, information asymmetry and liquidity in equity markets, Contemporary<br />

Accounting Research (Spring), 801-828.<br />



BRAND VALUE DRIVERS OF THE LARGEST BRICS COMPANIES 1<br />

Elvina R. Bayburina & Nataliya Chernova, National Research University – Higher School of Economics, Russia<br />

Email: elvina.bayburina@gmail.com, nataliya.s.chernova@gmail.com, http://en.cfcenter.ru/<br />

Abstract.<br />

The world financial crisis emphasized the problem of brand valuation: booming brand expectations were ruined by the world<br />
economic transformations. Criticism of brand valuation, coming predominantly from academics and business actors, covers both<br />
classical financial and modern approaches. The financial crisis made liquidity and stable cash flows the main current goals of major<br />
companies in developed and emerging markets. The obvious contradiction between current values and fundamental drivers, which are<br />
unique in the process of value creation, forms the research ground. Thus, the main research discussion covers two questions: whether it<br />
is necessary to consider fundamental brand value drivers, which differ from financials, and to use complicated analytical tools; and<br />
whether the value of the brand is a fundamental or a temporal corporate value driver. The framework of the empirical research is based<br />
on stakeholder theory and the intellectual capital concept. The authors investigate the influence of brand on corporate value within<br />
companies of the BRICS region. This research is urgent and important because the brand concept remains under-investigated, including<br />
in corporate finance; little attention has been paid to this issue, especially in countries with developing capital markets. Over recent<br />
years, more and more companies have realized the significance of, and the need for, strategic management of their brand portfolios.<br />
This is valid for all economic sectors: for fast-moving consumer goods as well as for industrial goods. “In accordance with a survey<br />
conducted by PricewaterhouseCoopers and Sattler (2001) on German brand products across all categories, the average percentage<br />
amounts to 56%; for manufacturers of fast moving consumer goods the figure is even higher and comes up to 62% of the total company<br />
value.” (Sattler, Hogl, Hupp, 2002). Methods of brand evaluation have proliferated across the global market, as well-focused brand<br />
management demands outstanding brand controlling.<br />
The data sample is formed from BRICS 2 companies. The first reason for this choice is the increased importance of these countries in<br />
the global market: as Goldman Sachs has argued, since Brazil, Russia, India and China are developing rapidly, by 2050 their combined<br />
economies could exceed the combined economies of the developed countries. The second reason is related to their original difference<br />
from the developed countries. It is commonly known that these countries have specific economic, political, social and legislative<br />
structures; therefore some laws existing in the developed countries have no implementation there. This research provides information<br />
which can help top managers in a range of industries to operate their brand portfolios better, to take part in buyouts and mergers, and to<br />
reshape investment activity in a more optimal way, which is of great importance especially for BRICS companies.<br />

Keywords: emerging markets, economic crisis, value drivers, intangibles, valuation, brand, brand value, branding, marketing,<br />

intellectual capital, client capital, BRICS<br />

JEL: L14, L24, G34, D85<br />

1. Introduction<br />

The world financial crisis emphasized the problem of brand valuation: booming brand expectations were ruined<br />
by the world economic transformations. Criticism of brand valuation, coming predominantly from academics and<br />
business actors, covers both classical financial and modern approaches. The financial crisis made liquidity and<br />
stable cash flows the main current goals of major companies in developed and emerging markets. The obvious<br />
contradiction between current values and fundamental drivers, which are unique in the process of value creation,<br />
forms the research ground. For example, the acquisition of the brand “Skype” looked very promising for its buyer<br />
“Ebay”; however, a lot was omitted due to the obvious limitations of a solely financial approach to brand valuation,<br />
and after the acquisition the overall management results were very far from the desired goals. It should be noted<br />
that those business actors who tended to survive and withstand the economic crisis continued to build the value of<br />
their brands on the basis of internal fundamental drivers such as marketing and client capital, organizational capital<br />
(innovations and technology), and qualified personnel and human capital (Bayburina, Levkin, 2010). Thus, the<br />
main research discussion covers two questions: whether it is necessary to consider fundamental value drivers,<br />
which differ from financials, and to use complicated analytical tools; and whether the value of the brand is a<br />
fundamental or a temporal corporate value driver. The framework of the empirical research is based on stakeholder<br />
theory and the intellectual capital concept. However, looking through the scientific literature dealing<br />

1 The results of the project “Researches of corporate financial decisions of the companies of Russia and other countries with emerging capital<br />

markets under conditions of global transformation of the capital markets and formation of economy of innovative type”, carried out within the<br />

framework of the Programme of Fundamental Studies of the Higher School of Economics in 2011, are presented in this paper.<br />

2 Brazil, Russia, India, China, South Africa. On 24 December 2010 South Africa received an official invitation to join BRIC. Its entry had previously<br />
been approved by the members of BRIC; however, according to many experts, South Africa is not on the level of the other BRIC countries<br />
by some economic measures.<br />



with the brand evaluation subject, it turns out that no valid and comprehensible model exists to date. The main<br />
methodological purpose of this paper is therefore to build a model which will be useful for estimating the real<br />
brand value and for subsequent empirical research on brand value. Accordingly, this research paper has several<br />
objectives. The first objective is to review the existing brand valuation approaches, based on different perspectives<br />
on brand equity; the review of the literature allows the necessary approach to be extracted. The second objective is<br />
to formulate and modify the methodology of the proposed research. Afterwards, the paper presents the main results<br />
and findings of the empirical research.<br />

2. Brand value perspective<br />

Tangible assets have ceased to be the main driver of corporate value growth and competitive advantage:<br />
globalization and technological and production development have intensified and have given companies from<br />
emerging markets new opportunities to succeed in the field of intangible assets. Keller (2007), summarizing the<br />
literature, lists many financial benefits of building a strong brand and reshaping the marketing strategy towards it:<br />
greater customer loyalty, larger profits and less vulnerability; competitive marketing programs and plans that help<br />
to absorb the consequences of economic crises; less elastic consumer response to price increases and more elastic<br />
response to price decreases; greater trade cooperation and support; increased marketing communication<br />
effectiveness; increased licensing opportunities; and many additional brand extension opportunities. As<br />
well-focused brand management demands outstanding brand controlling, the use of brand evaluation methods,<br />
along with the implementation of strategic brand management within the company, has essentially increased in the<br />
global market. Many authors devote their investigation and research efforts to the concept of brand value, trying to<br />
measure it and to give sound recommendations on creating and building a brand. Nevertheless, companies as well<br />
as consultants remain uncertain about exactly how to conduct a feasible brand assessment.<br />

Nowadays two main perspectives may be adopted. The first considers brand equity through a marketing analytical<br />
lens; the second is financially based. Each perspective has its advantages and disadvantages. To support the launch<br />
of a new brand, it is better to appraise brand prospects based on customers’ and clients’ perceptions (the marketing<br />
perspective). However, if the brand already exists and the focus lies more on the brand’s performance, then the<br />
financial evaluation of the brand tends to be more helpful and informative in terms of strategic corporate finance<br />
research and value based management. The financial evaluation of brands is also becoming increasingly vital as,<br />
recently, more and more brands have to be valued due to company buyouts, mergers or transfers of divisions,<br />
alliance activity and different kinds of network collaboration towards foreign markets, or even franchising and<br />
licensing. The review of brand valuation models shows that many works are devoted to the assessment of brand<br />
equity, but in spite of this fact there is still no valid and comprehensible model today. Most studies are based on<br />
assumptions that are almost impossible to prove empirically.<br />

2.1 Background approach<br />

In brand valuation theory all models can be split into three groups:<br />
- traditional economic brand valuation models;<br />
- client and behaviorally-oriented brand valuation models;<br />
- holistic economic and behaviorally-oriented brand valuation models.<br />
The most significant studies representing these models are described below.<br />

2.1.1 Traditional economic brand valuation models<br />

Capital market-oriented brand valuation (Simon and Sullivan, 1993). Simon and Sullivan define brand equity<br />
as “the incremental cash flows which accrue to branded products over and above the cash flows which would result<br />
from the sale of unbranded products” (Simon, Sullivan, 1993). To operationalize this definition, the model estimates<br />
the current market value of the company: the market value of equity gives an impartial market estimate of the<br />
company’s possibilities and future cash flows. The authors then extract the brand value by subtracting the value of<br />
the company’s other assets, which consist of tangibles and the remaining intangible assets: the value of other<br />
specific assets not associated with brand equity, and market-specific factors that ultimately lead to imperfect<br />
competition and market failures. The resulting estimate of brand equity is thus based on the market valuation of<br />
future cash flows.<br />

Market value-oriented brand valuation. In a market value-oriented approach, the value of a brand is established<br />
by the fair value of comparable brands. This procedure is reliable only if the market positions of the reference<br />
brand and the brand to be evaluated are more or less identical; this assumption is hardly realistic, but it is<br />
undoubtedly vital.<br />

Cost-oriented brand valuation. In this model the pure value of assets equals the total assets of the company less<br />
all obligations. There are two alternatives to the net asset value approach: to assess the brand value by means of the<br />
costs (historical or perceived) that arose when the brand was launched in the past, or to assess the expenses that<br />
would arise if the brand were launched today.<br />

Earning capacity-oriented brand valuation (Kern’s model). In this approach, the brand value is calculated by<br />
capitalizing the potential earnings (earnings to be discounted) received by the company that possesses the brand.<br />

Customer-oriented brand valuation. Sometimes purchase renewals are not the subject of marketing decision<br />
making, so the earning-capacity indicator in these models is based not only on net income but on the average<br />
customer contribution to the net income margin attained by a brand product and on the brand loyalty rate. As can<br />
be noticed, traditional economic brand valuation models reflect the financial perspective on brand equity: they<br />
measure the financial part of brand value and do not take into account the particular influence of customers and<br />
clients on brand value.<br />
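As an arithmetic illustration of the capital market-oriented model above, the brand is valued as the residual of the market’s valuation of equity after the other asset categories are deducted. The sketch below uses made-up figures and a deliberately simplified decomposition; it is not the estimation procedure of Simon and Sullivan themselves.<br />

```python
def residual_brand_value(market_equity_value, tangible_assets,
                         non_brand_intangibles):
    """Capital market-oriented residual: brand equity is what remains of the
    market's valuation of equity after stripping out tangible assets and the
    other, non-brand intangible assets (in the spirit of Simon & Sullivan,
    1993)."""
    return market_equity_value - tangible_assets - non_brand_intangibles

# Illustrative (made-up) figures, in USD millions:
print(residual_brand_value(12_000, 7_500, 2_000))  # 2500
```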

2.1.2 Client and behaviorally-oriented brand valuation models<br />

Aaker brand valuation model (Aaker, 1991). The model points out five determinants of brand value: brand<br />
loyalty, brand awareness, perceived quality, brand associations and other brand assets. It regards brand equity from<br />
the consumer’s perspective. However, the model does not yield a quantitative value.<br />
Kapferer brand valuation model (Kapferer, 1992). This approach assumes that the brand reduces the buying risk<br />
of customers, and determines brand value as the differential effect of brand knowledge on customer response to the<br />
marketing-mix efforts promoting the brand.<br />
Keller brand valuation model (Keller, 1993). This model rests on the assumption that brand value corresponds<br />
with knowledge of the brand, and it is based on comparison with an unbranded product. Brand knowledge consists<br />
of brand awareness and brand image.<br />
McKinsey brand valuation model (McKinsey, 1994). McKinsey defines the three “P”s of the brand (Performance,<br />
Personality and Presence) as the main components of a strong brand, and estimates brand strength as a function of<br />
those three “P”s.<br />
To sum up, client and behaviorally-oriented brand valuation models focus on consumers’ behavior and attitudes.<br />

2.1.3 Holistic economic and behaviorally-oriented brand valuation models<br />

These models include both financial and client-oriented indicators.<br />

Semion brand value approach (Clifton et al., 2009). The approach defines brand value drivers: the financials of<br />
the company, brand strength, brand protection and brand image. First, a factor value is ascertained for each of<br />
those determinants on the basis of the defined criteria, and the indicators for each factor are summed and<br />
aggregated into a single value. Second, the values are combined to produce a weighted factor. Finally, the average<br />
earnings of the past three years are multiplied by the obtained weighted factor.<br />
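The Semion-style calculation described above can be sketched as follows. The driver scores, their aggregation into a single weighted factor, and the earnings figures are all illustrative assumptions, since the actual scoring criteria are proprietary.<br />

```python
def semion_brand_value(factor_scores, past_three_years_earnings):
    """Semion-style estimate: combine the scores for the four value drivers
    (financials, brand strength, brand protection, brand image) into a single
    weighted factor, then multiply it by the three-year average earnings.
    The aggregation rule here (a plain sum) is a simplifying assumption."""
    weighted_factor = sum(factor_scores.values())
    avg_earnings = sum(past_three_years_earnings) / len(past_three_years_earnings)
    return weighted_factor * avg_earnings

# Hypothetical driver scores and earnings (USD millions):
scores = {"financials": 0.4, "strength": 0.6, "protection": 0.3, "image": 0.5}
print(semion_brand_value(scores, [100.0, 120.0, 140.0]))  # about 216.0
```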

Market-oriented brand valuation model (Bekmeier-Feuerhahn, 1998). In this model the brand value is related to<br />
brand strength and to brand earnings estimated by means of market prices.<br />



Interbrand brand value approach. In this model there are two key elements of brand valuation: the earnings<br />
referred to the brand and the brand strength multiplier. The multiplier shows the strength of the brand itself: “It<br />
takes into account seven factors: market leadership, brand stability, brand extension possibilities, current market<br />
prospect, and internationalization potential, adaptation to time, brand support and legal protection.” (Motameni,<br />
Shahrokhi, 1998). The brand strength is then correlated to a multiple, such as the price-earnings (P/E) ratio, which<br />
gauges certainty about the future, and the brand multiple is applied to the earnings of the brand to extract the value<br />
of brand equity. Interbrand Group has drawn a graph known as the “S-curve” to relate brand strength and the<br />
multiplier; this curve is based on Interbrand’s examination of the multipliers implied in many brand P/E ratios over<br />
the accepted period. Although this approach seems comprehensive, its shortcomings can be pointed out: in<br />
particular, the brand weights are based on historical data.<br />
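The Interbrand mechanics (brand earnings times a strength-derived multiple) can be sketched as below. The real strength-to-multiple mapping follows Interbrand’s proprietary S-curve; a linear mapping is used here purely as a placeholder, and all numbers are hypothetical.<br />

```python
def interbrand_value(brand_earnings, brand_strength, max_multiple=20.0,
                     max_strength=100.0):
    """Interbrand-style estimate: brand earnings times a strength-derived
    multiple. Interbrand maps the strength score to the multiple via an
    S-curve; the linear mapping below is a placeholder assumption."""
    multiple = max_multiple * brand_strength / max_strength
    return brand_earnings * multiple

# Hypothetical brand earnings (USD millions) and strength score out of 100:
print(interbrand_value(brand_earnings=50.0, brand_strength=60.0))  # 600.0
```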

Intellectual value concept and stakeholders’ approach. As noted, the majority of the above-mentioned valuation<br />
models estimate only the financial part of the brand, while others uncover clients’ and customers’ attitudes and<br />
behavior in brand valuation. To determine brand value drivers, holistic economic and behaviorally-oriented brand<br />
valuation models are to be used, since they include both financial estimates and comprehensive research. In this<br />
paper the combined approach is proposed, as it provides a more precise estimate of the brand value drivers that<br />
raise and boost corporate value according to the intellectual capital concept. From the point of view of Bayburina<br />
and Golovko (2008), intellectual capital is a complex of key qualitative characteristics of the company which are<br />
not always objects of intellectual property, such as, for instance, competencies. Thus, intellectual capital is a<br />
complex of knowledge, the accrued experience of employees, and intellectual property. It is important to mention<br />
that not all objects of engineering design are objects of intellectual property; however, their importance for the<br />
process of value creation is difficult to overestimate, for example in companies of the high-tech industry. At the<br />
same time it is complicated to provide a full, complete, exact definition of the notion “intellectual capital of the<br />
company”; for this purpose it is reasonable to concentrate on its structure and content. Broadly, it is possible to<br />
form a structure of intellectual capital which consists of the following five components, or first-level elements of<br />
intellectual capital; this approach has been used by the authors (Bayburina & Ivashkovskaya, 2007; Bayburina &<br />
Golovko, 2008; Bayburina & Golovko, 2009). Briefly, the structure of intellectual capital can be described as<br />
follows:<br />
1. human capital (key knowledge and abilities of the personnel);<br />
2. process capital (key characteristics of the business processes of the company);<br />
3. client capital (key features of the company which are necessary to manage customer relationships and loyalty);<br />
4. innovation capital (renovation techniques to maintain the future growth of the company);<br />
5. network capital (synergy which arises from the interactions of the company).<br />
The complex of such characteristics represents the competitive advantage originated inside and within the<br />
company. According to Bayburina and Golovko (2008), the volatility of the business environment on the one hand<br />
and opposing stakeholder interests on the other stipulate that value becomes the result of constant strategic<br />
changes. In this case value becomes a complicated intellectual parameter, which is managed and defined by the<br />
multi-level combination of interactions of different groups of stakeholders (e.g. clients); by this pattern the value is<br />
generated by its intellectual part. Thus Bayburina and Golovko (2008) conclude that the intellectual value of the<br />
company is the part of total value created through the process of accumulation of the intellectual components. This<br />
value, exactly like the brand value, can and should be “traced” by the external stakeholders of the company (Lev &<br />
Sougiannis, 1999; Singh & Van der Zahn, 2009).<br />

2.2 Hypotheses<br />

The methodology utilizes the combined approach, as the explanation of the results consequentially, though<br />
indirectly, includes the attitudes of consumers and clients. After determining the brand value and the brand value<br />
drivers, the influence of different factors will be estimated; among those factors are the well-known fundamental<br />
financials usually investigated according to the intellectual capital concept in terms of value, following Bayburina<br />
and Golovko (2009). According to the above-mentioned intellectual capital concept in terms of value, the<br />
hypotheses of this study are:<br />



Hypothesis 1. Positive influence of company size on the brand value. This hypothesis is based on the fact that a<br />
large company has a range of advantages: it has a more stable position in the market, and consumers and suppliers<br />
try to cooperate with such a company, as it is more observable and reliable. Large size allows the company to<br />
reduce production costs, to generate innovative products and to conduct large marketing campaigns.<br />

Hypothesis 2. Positive influence of the company’s cumulative marketing (advertising and promotion) expenses<br />
on the brand value, up to the point after which brand value returns diminish as expenses increase. Advertising is<br />
one of the main components of marketing communications. It can increase brand name recognition and a value<br />
premium for reputation, and it creates special associations with the particular brand, which differentiate the<br />
products of a company and defend them from imitation by competitors (Mizik and Jacobson, 2003). It can also be<br />
noticed that advertising is a kind of entry barrier for other companies, as a high amount of money must be spent to<br />
overthrow established brand loyalty (Ho et al., 2005). Another marketing communication is trade promotion,<br />
which can also create brand value (Keller, 2007): promotion enhances brand attributes and helps to identify the<br />
particular brand. In fact, about 60–75% of promotional budgets is spent on sales promotion (Belch and Belch,<br />
2007).<br />

Hypothesis 3. Positive influence of the company’s expenses on R&D, up to the point after which brand value<br />
returns diminish as expenses increase. The authors assume that companies with higher R&D expenses have<br />
stronger brands. This can be true, as new technologies and a company’s know-how can also be viewed as an entry<br />
barrier, since potential competitors need a high amount of money to obtain these advantages. To reach the stated<br />
goals, the empirical research model will be tested and the hypotheses will be verified.<br />
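Hypotheses 2 and 3 both posit an inverted-U relation, which is commonly tested by adding the squared expense term to the regression: with a positive linear coefficient b1 and a negative quadratic coefficient b2, returns diminish beyond the turning point -b1 / (2·b2). A minimal sketch with hypothetical coefficients:<br />

```python
# With brand value modeled as  a + b1 * expense + b2 * expense**2,
# b1 > 0 and b2 < 0 imply brand value rises up to a turning point and
# declines afterwards. The coefficients below are made-up numbers used
# purely for illustration, not estimates from the paper.
def turning_point(b1, b2):
    """Expense level at which marginal brand-value returns become negative."""
    return -b1 / (2.0 * b2)

b1, b2 = 0.8, -0.002          # hypothetical estimated coefficients
print(turning_point(b1, b2))  # about 200
```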

2.3 Research model<br />

(1) Brand value_it = α + ρ1·X^BVD_1,it + … + ρn·X^BVD_n,it + β1·X^FV_1,it + … + βk·X^FV_k,it + ε_it,<br />
(2) Brand value Ratio_it = α + ρ1·X^BVD_1,it + … + ρn·X^BVD_n,it + β1·X^FV_1,it + … + βk·X^FV_k,it + ε_it, where:<br />
Brand value_it – a vector of brand values, calculated according to formula (3);<br />
Brand value Ratio_it – a vector of brand value ratios, calculated according to formula (4);<br />
X^BVD – a vector of brand value drivers;<br />
X^FV – a vector of fundamental variables;<br />
ε – a vector of random errors (“white noise”);<br />
i – BRICS company index;<br />
t – a year index;<br />
n – a particular brand value driver index;<br />
k – a particular fundamental variable index.<br />
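Equations (1) and (2) are pooled panel regressions. A minimal sketch of how such a specification can be estimated is given below; the data are randomly generated placeholders, and the estimator is plain pooled OLS with year dummies, whereas the actual study may rely on richer panel estimators with firm effects.<br />

```python
import numpy as np

# Minimal pooled-OLS sketch of equation (1): brand value regressed on
# brand value drivers (BVD), fundamentals (FV) and year dummies.
# All data below are randomly generated placeholders, not the study's data.
rng = np.random.default_rng(0)
n_obs = 200
bvd = rng.normal(size=(n_obs, 2))          # e.g. marketing and R&D expenses
fv = rng.normal(size=(n_obs, 3))           # e.g. ln(assets), ln(equity), ROA
years = rng.integers(2005, 2010, n_obs)    # year index t, 2005-2009
year_dummies = (years[:, None] == np.arange(2006, 2010)).astype(float)

X = np.column_stack([np.ones(n_obs), bvd, fv, year_dummies])
true_beta = rng.normal(size=X.shape[1])    # simulated "true" coefficients
y = X @ true_beta + 0.1 * rng.normal(size=n_obs)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficient estimates
print(beta_hat.shape)  # one estimate per regressor, (10,)
```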

2.3.1 Dependent variable<br />

The dependent variable is the brand value. Two ways of calculation are considered. First, the brand value equals<br />
the market capitalization divided by the net assets:<br />
(3) Brand value_it = Market Capitalization_it / Net Assets_it, where:<br />
i – BRICS company index;<br />
t – a year index;<br />
- Market Capitalization_it – market value of all shares (USD);<br />
- Net Assets_it – book value of net assets (USD).<br />
The second, proxy dependent variable, the brand value ratio (the Tobin’s Q ratio), is calculated as the sum of the<br />
market capitalization and the total liabilities divided by the sum of the total assets and the total liabilities:<br />
(4) Brand value Ratio_it = Q-Tobin_it = (Market Capitalization_it + Total Liabilities_it) / (Total Assets_it +<br />
Total Liabilities_it), where:<br />
i – BRICS company index;<br />
t – a year index;<br />
- Market Capitalization_it – market value of all shares (USD);<br />
- Total Liabilities_it – book value of total liabilities (USD);<br />
- Total Assets_it – book value of total assets (USD).<br />
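The two dependent variables of formulas (3) and (4) can be computed directly from market and balance-sheet data; the figures in the example are made up purely for illustration.<br />

```python
def brand_value(market_cap, net_assets):
    """Formula (3): market capitalization over book value of net assets."""
    return market_cap / net_assets

def brand_value_ratio(market_cap, total_assets, total_liabilities):
    """Formula (4): Tobin's Q proxy, (market capitalization + total
    liabilities) / (total assets + total liabilities)."""
    return (market_cap + total_liabilities) / (total_assets + total_liabilities)

# Illustrative (made-up) figures, in USD millions:
print(brand_value(3_000, 1_200))               # 2.5
print(brand_value_ratio(3_000, 2_000, 1_000))  # about 1.33
```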



2.3.2 Independent variables<br />
1. Independent variables: fundamentals<br />
• Market Capitalization divided by Total Assets,<br />
• Return on assets (ROA),<br />
• Return on equity (ROE),<br />
• Capital Expenditure divided by Total Assets,<br />
• Ln (Revenue),<br />
• Ln (Market Capitalization),<br />
• Ln (delta Market Capitalization),<br />
• Ln (Total Assets),<br />
• Ln (Net Assets),<br />
• Ln (EBIT),<br />
• Ln (Equity),<br />
• Ln (Total Liabilities),<br />
• Ln (Capital Expenditure),<br />
• Ln (delta Capital Expenditure).<br />
Fundamental variables were included in the research model as “control” variables.<br />
2. Independent variables: Brand value drivers<br />
• Marketing Expenses in present time period,<br />
• Marketing Expenses in previous time period,<br />
• R&D Expenses in previous time period,<br />
• Ln (Marketing Expenses in present time period),<br />
• Ln (Marketing Expenses in previous time period),<br />
• Ln (R&D Expenses in present time period),<br />
• Ln (R&D Expenses in previous time period),<br />
• Ln (Total Assets divided by Number of Employees),<br />
• Ln (Number of Employees).<br />
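A minimal sketch of how the Ln (...) regressors above might be constructed, assuming (this is our assumption, not stated in the paper) that observations with non-positive arguments, such as a negative delta of Market Capitalization, cannot enter the log-transformed variable:

```python
import math

def safe_ln(x):
    """Natural logarithm used for the Ln(...) regressors; returns None when the
    argument is non-positive (e.g. a negative delta of Market Capitalization),
    since the observation cannot enter the log-transformed variable."""
    return math.log(x) if x > 0 else None

revenue = [10.0, 100.0, 1000.0]   # hypothetical revenues
ln_revenue = [safe_ln(r) for r in revenue]

delta_mc = [5.0, -2.0]            # hypothetical changes in Market Capitalization
ln_delta_mc = [safe_ln(d) for d in delta_mc]  # second entry is None
```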

The table below presents the hypothesized signs of the coefficients.<br />

Independent variables | Hypothetical sign (Brand value) | Hypothetical sign (Q-Tobin)<br />
Market Capitalization divided by Total Assets | + | +<br />
Return on assets | - | +<br />
Return on equity | + | -<br />
Capital Expenditure divided by Total Assets | + | +<br />
Ln (Revenue) | + | +<br />
Ln (Market Capitalization) | + | +<br />
Ln (delta Market Capitalization) | + | +<br />
Ln (Total Assets) | - | -<br />
Ln (Net Assets) | - | -<br />
Ln (EBIT) | + | +<br />
Ln (Equity) | - | -<br />
Ln (Total Liabilities) | + | -<br />
Ln (Capital Expenditure) | + | +<br />
Ln (delta Capital Expenditure) | + | +<br />
Ln (Capital Expenditure divided by Total Assets) | + | +<br />
Marketing Expenses in previous time period | + | +<br />
R&D Expenses in previous time period | + | +<br />
Total Assets divided by Number of Employees | + | +<br />
Number of Employees | + | +<br />
Ln (Marketing Expenses in present time period) | + | +<br />
Ln (Marketing Expenses in previous time period) | + | +<br />
Ln (R&D Expenses in present time period) | + | +<br />
Ln (R&D Expenses in previous time period) | + | +<br />
Ln (Total Assets divided by Number of Employees) | + | +<br />
Ln (Number of Employees) | + | +<br />
Time influence | Significant | Significant<br />
Industry influence | Significant | Significant<br />
Table 1. Hypothetical signs of the coefficients.<br />
2.4 Sample and sources of data<br />

The research conducted in this study is dedicated only to BRICS companies because, by widespread opinion, the<br />
BRICS countries are the most interesting sample of emerging markets to investigate. The term “BRIC” was coined by<br />
Jim O'Neill, an economist at the Goldman Sachs investment bank, in 2001. After the 2011 Davos conference it was<br />
proposed to include South Africa in the BRIC group of emerging markets, since it reflects the issues common to<br />
Brazil, Russia, India and China. The BRIC countries have attracted considerable media and academic attention in<br />
recent years. These countries differ from one another in culture, background, language, and the structure of their<br />
economies. However, they share a common denominator: economic growth in the BRICS has greatly exceeded that of<br />
the world's leading industrialized nations. Even after the economic crisis that started in 2007, they continued to<br />
outperform the rest of the world. While in 2009 large economies shrank by as much as 6% (e.g., Japan and Germany),<br />
Brazil stayed steady, India grew 5.9%, and China 8.1%; only Russia was the group's poor performer, shrinking 7%.<br />
One more common characteristic is that the financial assets of these five countries are undervalued and have great<br />
growth potential. Forecasts of their future performance will further increase interest in these countries and also justify<br />
picking exactly this set of countries for the research in this study.<br />

While all the BRICS countries are associated with the fastest-developing markets, the potential of each country is<br />
estimated at a different level. After the crisis, scholars divided these countries into two groups – “the source<br />
economies” (Brazil, Russia and South Africa) and “the pushovers” (India, China). Accordingly, this paper develops a<br />
separate model for each group. Taking into account the need to assess companies from different countries, the final<br />
research sample was formed according to the following criteria:<br />

• The sample is formed from BRICS companies listed from 2005 to 2010.<br />
• Such research is of special interest because, on the one hand, the BRICS countries are leaders among emerging<br />
economies while, at the same time, they differ dramatically from each other. It should be noted that the<br />
information used in the study must be available from open sources (e.g. annual reports or other information on<br />
websites), making it traceable for a wide range of users. To ensure comparability, companies must have financial<br />
reports prepared under national GAAP accounting standards. As the results of previous studies show (Bayburina,<br />
Golovko, 2009), the number of companies, especially from India and Brazil, with IFRS financial reports before<br />
2005 is not sufficient for the research purposes, so it was necessary to restrict the use of such reports in the<br />
research.<br />
• The sample is formed from BRICS companies for which enough Brand data is disclosed. Observation period:<br />
2005-2009 (5 full financial years). Number of observations: 306.<br />

Number of Companies Country of residence<br />

71 Brazil<br />

58 Russia<br />

61 India<br />

57 China<br />

59 South Africa<br />

Table 2. BRICS sample.<br />
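As a quick consistency check, the per-country counts in Table 2 sum exactly to the stated 306 observations:

```python
# Per-country company counts taken from Table 2; their total matches the
# stated number of observations (306).
sample = {"Brazil": 71, "Russia": 58, "India": 61, "China": 57, "South Africa": 59}
total = sum(sample.values())
print(total)  # 306
```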



The following data sources were used:<br />
1. Bloomberg database (fundamental factors and Brand value drivers).<br />
2. Corporate official websites (Brand value drivers).<br />
3. Corporate accounting reports (fundamental factors and Brand value drivers).<br />
To reach the main goal of the research, a dedicated research model was introduced and a series of tests was<br />
conducted. The data analysis was carried out with software tools (the program packages Stata 11.0, EViews 7.0, and<br />
Microsoft Excel 2007).<br />

3. Results<br />

The pooled BRICS sample was split into two parts: “India, China” and “Brazil, Russia, South Africa”. To verify<br />
each hypothesis, a series of linear regression tests was conducted. The final models of the research contain significant<br />
factors (at no less than the 5% significance level; the model constant is significant at the 6.5% level), and their<br />
coefficients of determination are the highest (R2 (“India, China” model) = 0.8593 and R2 (“Brazil, Russia, South<br />
Africa” model) = 0.2442).<br />

Test for normality. Both models were tested for normality of the variables' distributions; the Skewness-Kurtosis<br />
Test for Normality was performed.<br />
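The moment statistics underlying a skewness-kurtosis normality test can be sketched as below. This is illustrative only (the study ran its tests in statistical packages, not in code like this): both quantities should be near zero for normally distributed data.

```python
def skewness_kurtosis(xs):
    """Moment-based sample skewness and excess kurtosis, the quantities behind
    a Skewness-Kurtosis normality test; both are near zero for normal data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

# A symmetric hypothetical sample has zero skewness:
skew, kurt = skewness_kurtosis([1.0, 2.0, 3.0, 4.0, 5.0])
print(skew)  # 0.0
```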

Test for multicollinearity. All models were tested for the presence of multicollinearity. In every final model all<br />
VIFs are less than 4. The mean VIF was 1.89 for the chosen set of variables in the “India, China” subsample and<br />
2.07 for the chosen set of variables in the “Brazil, Russia, South Africa” subsample.<br />
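For intuition, in the two-regressor case the variance inflation factor reduces to 1/(1 - r²), where r is the Pearson correlation between the regressors. A minimal sketch on hypothetical, nearly collinear data (not the paper's variables):

```python
def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def vif_two_regressors(x1, x2):
    """VIF_j = 1 / (1 - R^2_j); with only two regressors, R^2 is the squared
    Pearson correlation between them."""
    r2 = pearson_r(x1, x2) ** 2
    return 1.0 / (1.0 - r2)

# Hypothetical, nearly collinear regressors:
print(vif_two_regressors([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # ≈ 28.0
```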

Test for heteroskedasticity. All models were tested for the presence of heteroskedasticity. All tests for the final<br />
models confirm homoskedasticity.<br />

“India, China” model. The model has several significant variables: Market Capitalization divided by<br />
Total Assets, Return on equity, Ln (Equity), Ln (Marketing Expenses in present time period), and Ln (Total Assets<br />
divided by Number of Employees). To choose the final model specification it was necessary to conduct specification<br />
tests, namely the Wald Test, the Breusch-Pagan Test, and the Hausman Test:<br />
1. The Wald Test showed that the Fixed Effect Model provides a better explanation than the Pooled Model.<br />
2. The Breusch-Pagan Test showed that the Pooled Model provides a better explanation than the Random Effect<br />
Model.<br />
3. The Hausman Test showed that the Fixed Effect Model provides a better explanation than the Random Effect Model.<br />
4. The Fixed Effect Model is final.<br />
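For intuition, the Hausman statistic can be sketched for a single coefficient as below. The estimates used are hypothetical, not the paper's; a large statistic (small p-value) favours the fixed-effects model, as found here.

```python
def hausman_stat(b_fe, b_re, var_fe, var_re):
    """Hausman statistic for a single coefficient:
    H = (b_FE - b_RE)^2 / (Var(b_FE) - Var(b_RE)),
    compared against a chi-squared distribution with 1 degree of freedom."""
    return (b_fe - b_re) ** 2 / (var_fe - var_re)

# Hypothetical fixed-effects and random-effects estimates:
h = hausman_stat(1.5, 1.2, 0.05, 0.02)
print(h)  # ≈ 3.0
```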

Test Statistics<br />
Wald test F test that all u_i=0; F(104, 298) = 6.48<br />
Prob > F = 0.0000<br />
Breusch-Pagan test chi2 = 115.50 Prob > chi2 = 0.0000<br />
Hausman test chi2 = 67.08 Prob > chi2 = 0.0000<br />
Table 3. Specification tests for the “India, China” model.<br />
The final model is presented below:<br />

(3) Brand valueit = 20.425 + 1.856 × MC/TAit + 3.357 × ROEit - 3.054 × LnEit + 1.308 × LnTA/NEit + 1.025 × LnMEit,<br />
where:<br />
• MC/TA – Market Capitalization divided by Total Assets;<br />
• ROE – Return on equity;<br />
• LnE – natural logarithm of Equity;<br />
• LnTA/NE – natural logarithm of Total Assets divided by Number of Employees;<br />
• LnME – natural logarithm of Marketing Expenses in present time period;<br />



• i – BRICS company index;<br />
• t – year index.<br />
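The fitted equation can be evaluated directly. The coefficients below are those reported in the paper; the inputs are hypothetical, and the two ratio regressors enter linearly while the remaining regressors are logged, matching the specification:

```python
import math

def predict_brand_value(mc_ta, roe, equity, ta_per_employee, marketing_exp):
    """Fitted "India, China" model with the coefficients reported in the paper."""
    return (20.425
            + 1.856 * mc_ta
            + 3.357 * roe
            - 3.054 * math.log(equity)
            + 1.308 * math.log(ta_per_employee)
            + 1.025 * math.log(marketing_exp))

# With zero ratios and all log arguments equal to 1, only the constant remains:
print(predict_brand_value(0.0, 0.0, 1.0, 1.0, 1.0))  # 20.425
```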

“Brazil, Russia, South Africa” model. The model has several significant variables: Return on assets, Capital<br />
Expenditure divided by Total Assets, Ln (Revenue), Ln (Market Capitalization), R&D Expenses in previous time<br />
period, Total Assets divided by Number of Employees, Number of Employees, and Ln (R&D Expenses in present<br />
time period). To choose the final model specification it was necessary to conduct specification tests, namely the<br />
Wald Test, the Breusch-Pagan Test, and the Hausman Test:<br />

1. The Wald Test showed that the Pooled Model provides a better explanation than the Fixed Effect Model.<br />
2. The Breusch-Pagan Test showed that the Pooled Model provides a better explanation than the Random Effect<br />
Model.<br />
3. The Hausman Test showed that the Fixed Effect Model provides a better explanation than the Random Effect Model.<br />
4. The Pooled Model is final.<br />

Test Statistics<br />
Wald test F test that all u_i=0; F(168, 570) = 0.93<br />
Prob > F = 0.6985<br />
Breusch-Pagan test chi2 = 0.32 Prob > chi2 = 0.5727<br />
Hausman test chi2 = 17.43 Prob > chi2 = 0.0078<br />
Table 4. Specification tests for the “Brazil, Russia, South Africa” model.<br />
The final model is presented below:<br />

(4) Brand value Ratioit = -881.381 + 0.006 × NEit + 0.002 × TA/NEit + 0.696 × R&D0it + 0.401 × ROAit +<br />
51.042 × LnMCit + 158.758 × Capex/TAit + 61.764 × LnR&Dit - 0.003 × LnRit, where:<br />

• NE – Number of Employees;<br />
• TA/NE – Total Assets divided by Number of Employees;<br />
• R&D0 – R&D Expenses in previous time period;<br />
• ROA – Return on assets;<br />
• LnMC – Ln (Market Capitalization);<br />
• Capex/TA – Capital Expenditure divided by Total Assets;<br />
• LnR&D – Ln (R&D Expenses in present time period);<br />
• LnR – Ln (Revenue);<br />
• i – BRICS company index;<br />
• t – year index.<br />

4. Conclusions<br />

The world financial crisis emphasized the problem of brand valuation: booming brand expectations were ruined by<br />
worldwide economic transformations. Criticism of brand valuation, coming predominantly from academics and<br />
business practitioners, covers both classical financial and modern approaches to valuation. The financial crisis made<br />
liquidity and stable cash flows the main current goals of major companies in developed and emerging markets. The<br />
evident contradiction between current values and the fundamental drivers that are unique in the process of value<br />
creation forms the ground for this research. For example, the acquisition of the “Skype” brand seemed highly<br />
promising for its buyer “Ebay”; however, much was lost owing to the limitations of a purely financial approach to<br />
brand valuation, and after the acquisition the overall management results fell far short of the desired goals. It should<br />
be noted that the businesses that managed to withstand the economic crisis continued to build the value of their<br />
brands on internal fundamental drivers such as marketing and client capital, organizational capital (innovations and<br />
technology), and qualified personnel and human capital (Bayburina, Levkin, 2010). Thus, the main research<br />
discussion covers whether it is necessary to consider fundamental value drivers beyond the purely financial ones and<br />
to use sophisticated analytical tools, and whether brand value is a fundamental or a temporary corporate value driver.<br />
The framework of the empirical research is based on stakeholder theory and the intellectual capital concept. The<br />
authors set the goal of estimating Brand value drivers by means of panel data. The paper therefore provides an<br />
empirical study of Brand value drivers and Brand value for large BRICS companies over a five-year period.<br />



The research shows that the Brand value drivers, the non-fundamental components of intellectual capital, are<br />
rather significant in terms of sustainability. The greatest effect of these non-fundamental determinants is observed in<br />
the “India, China” sample with Brand value as the dependent variable, and in the “Brazil, Russia, South Africa”<br />
sample with the Brand value ratio (Q-Tobin) as the dependent variable. The contributions of marketing and R&D<br />
expenses to Brand value are evident: all coefficients are positive. This verifies the second and third hypotheses and<br />
suggests that most BRICS companies spend relatively little on these needs (Chu, Keh, 2006). However, it proved<br />
impossible to test the parts of both hypotheses positing a trigger point beyond which returns to brand value diminish<br />
while expenses grow, since even the largest BRICS companies have not yet reached this point. The authors conclude<br />
that BRICS companies do not invest in brand promotion as heavily as large multinationals in developed countries do.<br />
The practical implication is clear: spending on marketing and R&D can give a company the opportunity to create a<br />
competitive advantage. The efficiency of such expenses, however, depends on the features of each particular<br />
emerging country. In the “India, China” subsample marketing expenses are significant, while in the “Brazil, Russia,<br />
South Africa” subsample R&D expenses are. This can be explained by the differing costs of marketing and R&D<br />
across countries: R&D costs are quite low in the “Brazil, Russia, South Africa” subsample relative to the “India,<br />
China” subsample, whereas marketing costs are lower in the “India, China” subsample than in the “Brazil, Russia,<br />
South Africa” subsample. It can be assumed that large public Indian and Chinese companies need to develop brand<br />
promotion programs and spend much more on them; this is a clear avenue for boosting Brand value. Likewise, large<br />
public Brazilian, Russian, and South African companies should invest more in Research and Development strategic<br />
roadmaps, which are without doubt evident Brand value drivers. In all cases, an increase in the number of employees<br />
has a positive impact on brand value. In public companies of emerging capital markets, large size allows a company<br />
to reduce production costs, generate innovative products, and conduct extensive marketing promotion. Summing up,<br />
the research shows that Brand value drivers are a matter of concern: they influence corporate value and can be of<br />
great importance to a company's survival under global competition. Large BRICS companies should try to increase<br />
expenses on marketing and R&D, as these have enormous growth potential. Future research should pay attention to<br />
other components of brand value and explore variation by industry type and by particular year (crisis and post-crisis<br />
features).<br />

5. References<br />

Aaker, D. (1991) Managing Brand Equity, San Francisco: Free Press.<br />

Aaker, D. (1996) Measuring Brand Equity across Products and Markets, California Management Review, 102-120.<br />

Bayburina, E. (2007) Intellectual Capital Investigation Techniques as the Key Trigger of the Sustainable Long-term<br />

Company Development, Corporate Finance (e-journal), Vol. 3, 85-101 (In Russian).<br />

Bayburina, E. & Golovko, T. (2008) Intellectual Value of the company and the factors of its growth: a panel study<br />

of the large Russian companies, Corporate Finance (e-journal), Vol. 6, 5-19 (In Russian).<br />

Bayburina, E. & Golovko, T. (2009) Design of Sustainable Development: Intellectual Value of Large BRIC<br />

Companies and Factors of their Growth, Journal of Knowledge Management (e-journal),Vol.7 (5), 535-558<br />

Bayburina, E. & Levkin, D. (2010) Brand stabilizer bottom-line: empirical research of large Russian companies,<br />
Contemporary Views on Business - Partnering for the Future, Conference Proceedings, Helsinki, 312-313.<br />

Bekmeier-Feuerhahn, S. (1998) Marktorientierte Markenbewertung. Eine konsumenten- und unternehmensbezogene<br />

Betrachtung, Wiesbaden.<br />

Belch, G.E., Belch, M.A. (2007) Advertising and promotion: an integrated marketing communications perspective,<br />

7th(ed.), McGraw Hill.<br />

Clifton, R. et al. (2009) Brands and branding, NY: Bloomberg Press.<br />

Chu, S., Keh, H. (2006) Brand Value Creation: Analysis of the Interbrand-Business Week Brand Value Rankings.<br />

Marketing Letters, 323-331.<br />

Ferjani, M., Jedidi, K., Jagpal, S. (2009) A Conjoint Approach for Consumer- and Firm-Level Brand Valuation.<br />

Journal of Marketing Research, 846–862.<br />



Ho, Y.K., Keh, H.T., Ong, J. (2005) The effects of R&D and advertising on firm value: An examination of<br />

manufacturing and non-manufacturing firms, IEEE Transactions on Engineering Management, 52(1), 3–14.<br />

Jucaityte, I., Virvilaite, R. (2007) Integrated Model of Brand Valuation, Economics and Management, 376-383.<br />

Kallapur, S., Kwan, S. (2004) The Value Relevance and Reliability of Brand Assets Recognized by U.K. Firms,<br />

Accounting Review, 151-172.<br />

Kamakura, W., Russell, G. (1993) Measuring Brand Value with Scanner Data, Elsevier Science Publishers B.V, 9-<br />

22.<br />

Kapferer, J.N. (1992). Strategic brand management: new approaches to creating and evaluating brand equity,<br />

Kogan Page, London.<br />

Keller, K. (1993) Conceptualizing, Measuring and Managing Customer-Based Brand Equity, Journal of Marketing,<br />

57(1), 1-22.<br />

Keller, K. (2007) Strategic Brand Management Building, Measuring and Managing Brand Equity, 3rd(ed.), Upper<br />

Saddle River, NJ: Prentice Hall.<br />

Lev, B., Sougiannis, T. (1999) Penetrating the book-to-market black box: the R&D effect, Journal of Business<br />

Finance and Accounting, Vol. 26 (3/4), 419-449.<br />

McKinsey, F. (1994) Winning the right to brand, Brand Strategy Newsletter, November.<br />

Mizik, N., Jacobson, R. (2003) Trading Off Between Value Creation and Value Appropriation: The Financial<br />

Implications of Shifts in Strategic Emphasis, Journal of Marketing, Vol. 67(1), 63-76.<br />

Motameni, R., Shahrokhi, M. (1998) Brand Equity Valuation: a Global Perspective, Journal of Product and Brand<br />

Management, 275-290.<br />

Sattler, H., Hogl, S., Hupp, O. (2002) Evaluation of the Financial Value of Brand, Research Papers on Marketing<br />

and Retailing, University of Hamburg.<br />

Singh, I. & Van der Zahn, M. (2009) Intellectual capital prospectus disclosure and post-issue stock performance,<br />
Journal of Intellectual Capital, Vol. 10(3), 425-450.<br />

Simon, C., Sullivan, M. (1993) The Measurement and Determinants of Brand Equity: a Financial Approach,<br />

Marketing Science, 28-52.<br />

Virvilaite, R., Jucaityte, I. (2008) Brand Valuation: Viewpoint of Customer and Company, Engineering Economics,<br />

28-52.<br />



GOVERNANCE, BOARD INDEPENDENCE, SUBCOMMITTEES AND FIRM PERFORMANCE:<br />

EVIDENCE FROM AUSTRALIA<br />

Wanachan Singh, Suan Dusit Rajabhat University, Bangkok, Thailand<br />

Robert T Evans, Curtin University, Perth, Australia<br />

John P Evans, Curtin University, Perth, Australia<br />

Email: John.Evans@curtin.edu.au, www.curtin.edu.au<br />

Abstract. This study investigates whether the monitoring of company management by an independent board of directors serves to<br />

enhance firm performance in Australia. The paper is of interest in that it explores the impact of sub-committees such as audit,<br />

remuneration, and nomination, as well as the quality and size of the audit firms. From the perspective of a regulator, our findings<br />
suggest that the imposition of sub-committees and their respective composition does not in itself align the interests of<br />
managers and shareholders.<br />

Keywords: Governance, board of directors, agency theory, Australia<br />

1 Introduction<br />

Agency theory defines the agency relationship where the principal (or owner) delegates tasks to an agent (or<br />

manager). The theory highlights costs associated with the principal-agent relationship which include the<br />

opportunistic behaviour or self-interest of the agent taking priority over the principal’s interest. Mallin (2004)<br />

highlighted a number of dimensions to this including the agent misusing their power for financial or other<br />

advantage, and the agent not taking appropriate risks in pursuance of the principal’s interests – often because<br />

managers are more risk-averse than the companies they lead. Another cost arises due to the principal and agent<br />

having access to different levels of information; the agent (manager) usually being in control of superior and more<br />

detailed information than that of the owner (information asymmetry). This requires the owner to institute expensive<br />
monitoring of the managers' actions to redress the knowledge imbalance.<br />

Fama (1980) argued that the boards of directors provide the most critical internally-based method for<br />

monitoring the performance of managers. They have the ability to directly oversee the performance of managers and<br />

to offer incentives to managers which reward performance in line with owner expectations, or equally, discipline<br />

managers when these are not met. Fama and Jensen (1983) note that effective monitoring requires the board of<br />

directors to act as an independent arbiter between management and owners, as collusion could result in an overall<br />

loss to the owners. To assess the degree to which a board may act independently to monitor the performance of<br />

managers, a number of key measures of board independence have been studied. These include the size and structure<br />

of the board, with the former referring to the number of directors and the latter generally being the proportion of<br />

non-executives, the number of sub-committees, the proportion of non-executives on subcommittees and whether the<br />

CEO and chairperson positions are combined. If monitoring is effective, a positive relationship should exist between<br />

the level of board independence and firm performance (Mura 2007; Choi, Park and Yoo 2007; Schmid and<br />

Zimmermann, 2008).<br />

There has been little research on board-monitoring subcommittees, particularly on their existence and<br />
independence (Gales and Kesner 1994; Dalton et al. 1998). This study extends the work of previous researchers<br />

to include the number of board subcommittees established by the main board as well as their composition as part of<br />

the internal corporate governance mechanisms that potentially affect firm performance.<br />

2 Background<br />

A number of empirical studies have been undertaken to assess the degree to which the independence of the board<br />
(as reflected in the above characteristics) impacts company performance. Rosenstein and Wyatt (1990) assessed<br />

whether share prices respond positively when additional outside directors are appointed to companies. They found a<br />

statistically significant, but economically-small effect on prices averaging around 0.2%. They found this effect to be<br />

slightly stronger when the appointment is for a director representing a financial institution versus those in other<br />
businesses. Their subsequent (1997) study focused on the price effect of the appointment of insiders (managers) to the board.<br />

They reported associated stock price decreases, although these were only significant when an insider is added to the<br />

board of a company where inside directors own a significant proportion (5–25%) of the shares.<br />

Consistent with the above study, Klein (1998) found only modest association between firm performance and<br />

overall board composition with directors divided into insiders, outsiders and affiliates. She did however isolate<br />

significant positive relationships between the percentage of executive directors on finance and investment subcommittees<br />

with both accounting and market performance measures. Additionally, weak evidence is provided of a<br />

positive relation between firm performance and the presence of at least one outside director holding at least five<br />

percent of the firm’s equity.<br />

Fich (2005) analysed 1,493 first-time director appointments to Fortune 1000 boards, for the period 1997–99, to<br />

assess whether certain outside directors produce more positive share price reactions. The study found that appointees<br />

who are CEOs of other companies result in a more positive share price reaction than for other appointments. In<br />

addition, the study revealed positive long-term performance benefits in firms that appoint outside CEOs as directors.<br />

However, the study found that the CEOs experienced negative stock-price effects at their own firms when they<br />

accepted outside director positions and this was particularly pronounced if the CEOs’ own firms were faced with<br />

significant growth opportunities.<br />

Mura (2007) also examined the relationship between firm performance and board composition. The results,<br />

which included a test for endogeneity among variables, confirmed that the direction of the relationship moves from<br />

board composition to performance. A significant positive coefficient on the variable of the proportion of outside<br />

directors showed that the proportion of non-executives on the board has a positive impact on firm performance. The<br />

study supported the idea that non-executive directors are effective monitors of the firms’ management for this U.K.-<br />
based sample.<br />

The level of outsider representation on the sub-committees of the main board has also been associated with the<br />

independence of the board and reduced agency costs. For example in Australia, the Bosch Committee (1995)<br />

proposed that audit committees should comprise a majority of non-executive directors. Stapledon and Lawrence<br />

(1997) concluded in a study of the top 100 Australian companies that independent directors are better-placed to<br />

effectively monitor executive management. Vance (1983) drew a similar conclusion, believing that their<br />

independence ensures both objectivity and that there are sufficient ‘checks and balances’ on managerial behaviour.<br />

Menon and Williams (1994, p.125) similarly stated that the presence of executive managers on an audit committee<br />

precludes them from being an objective monitor of management. Finally, Cotter and Silvester (2003, p.214) reported<br />

that “independent audit committees can reduce agency costs by minimising the opportunistic selection of financial<br />

accounting policies, and by increasing the credibility and accuracy of financial reporting”.<br />

The independence of remuneration committees which determine the reward packages for senior executives is<br />

also considered essential (Bosch 1995). Kesner (1988) believed this determination to be central to the monitoring<br />

role of the board as it evaluates the performance of the managers in line with company goals and sets appropriate<br />

rewards for performance. Cotter and Silvester (2003) noted that an independent committee is far more likely to<br />

determine a fair and equitable reward package, thereby reducing agency costs, than if executives are responsible for<br />

setting their own pay.<br />

Laing and Weir (1999) studied the relationship between the governance structures proposed by the Cadbury<br />

Committee (1992) and corporate performance in the U.K. The sample studied consisted of 115 U.K. non-financial<br />

quoted companies for the years 1992 and 1995. The study found a positive impact of board subcommittees on<br />

performance (measured as return on assets) in both years. Companies with remuneration and audit committees<br />

outperformed others. Additional evidence found was a significant improvement in performance for those firms<br />

which introduced audit and remuneration committees during the periods under examination, suggesting that the<br />

establishment of board committees is an effective monitoring mechanism. A further study by Weir and Laing (2000)<br />

found the presence of a remuneration committee has a positive effect on performance as measured by market returns<br />

but not on the accounting performance. Similarly, the return on assets is lower if firms have more outside directors<br />

on the board, but this is not reflected in the market returns.<br />



The independence of the board is also considered in the context of the leadership structure in operation, i.e. are<br />

the roles of CEO and chair combined or separated? The empirical evidence on this question is limited but<br />
generally suggests that a separation of roles leads to improved performance. An empirical study by Rechner and<br />

Dalton (1991) found that companies with separated roles outperform firms with combined roles. They assessed that<br />

their findings “may provide empirical support for some strongly-worded admonitions about a governance structure<br />

that includes the same individual serving simultaneously as CEO and board chairperson” (p.59).<br />

Pi and Timme (1993) produced similar results in a sample of banks over the 1987–90 period. Their study, which<br />

included controls for firm size and other variables, determined that the accounting-based measure (return on assets)<br />

was higher for firms with separated roles. Baliga et al. (1996) used a market-based measure (Market Value Added)<br />

to similarly determine that a dual leadership structure produced superior returns in the long run.<br />

Elsayed (2007) examined the extent to which board leadership structure (as proxied by CEO duality) impacts<br />

corporate performance in Egyptian-listed companies. Two alternative measures of corporate performance were used:<br />

return on assets and Tobin’s Q ratio. The sample was taken from Egyptian public limited firms over the time period<br />

2000–04. The data on board leadership structure, corporate performance and other related variables were available<br />

for 92 firms in 19 different industrial sectors. The findings initially indicated that board leadership structure has no<br />

direct impact on corporate performance. However, additional analysis revealed that the impact of CEO duality on<br />

corporate performance varied with financial performance of firms and across industry types. Consistent with the<br />

finding of Finkelstein and D’Aveni (1994), CEO duality is preferable in low-performance firms. Overall, the<br />

findings support the conclusion of the meta-analysis of Rhoades, Rechner, and Sundaramurthy (2001) and the<br />

argument of Boyd (1995) and Brickley, Coles, and Jarrell (1997), in that the relationship between board leadership<br />

structure and corporate performance may vary within the context of firms and industry and that CEO duality will<br />

only be advantageous for some firms whilst not for others.<br />

Board sub-committees were investigated by Weir, Laing, and McKnight (2002) using 311 quoted, non-financial<br />

U.K. firms covering the period 1994–96. They found little evidence that board structure affects performance. Their<br />

results also indicated that the structure and quality of board subcommittees have little impact on performance – a<br />

finding consistent with Klein (1998), Vafeas and Theodorou (1998) and Dalton et al. (1998). However, the study<br />

found that companies in the top performance deciles have a greater proportion of independent directors both on their<br />

boards and on their audit committees.<br />

The literature on governance characteristics and performance is still developing, particularly in its application to<br />

countries outside the U.S. and U.K. Generally, public policy has run ahead of rigorous empirical research<br />
findings, with many countries imposing regulation or codes of conduct upon companies. These have usually been<br />

based on the assumption that an independent board is in the best interest of shareholders, although the literature<br />

provides only modest support for this view.<br />

3 Hypothesis Development<br />

Agency theory suggests the use of effective corporate governance mechanisms to mitigate manager–shareholder<br />

conflicts and to monitor the performance of managers. Hypotheses relating to effective corporate governance<br />

mechanisms are developed under four main categories: the proportion of independent directors who form part of<br />

the board; the leadership structure of the CEO and chairperson; the existence and independence of various board<br />

sub-committees; and the role of external auditors.<br />

3.1 Proportion of outside directors<br />

National corporate governance codes of conduct generally focus on how boards of directors should be structured in<br />

order to generate independent control of companies. Most often they prescribe a minimum representation of non-executive<br />
directors as a means of achieving sufficient independence from management (Bosch 1995; NACD 1996;<br />

Holmstrom and Kaplan, 2003). Non-executive directors are believed to play an important role in monitoring, and<br />

perhaps challenging if needed, management. This is supported in agency theory which suggests that effective<br />

monitoring leads to a reduction in agency costs since managers have fewer opportunities to build their personal<br />

wealth at the expense of shareholders.<br />



Despite the above, empirical evidence on the value of non-executive directors on boards is mixed. Rosenstein<br />

and Wyatt (1990) showed that a positive stock price reaction follows the appointment of non-executive directors to<br />

company boards. Weir, Laing, and McKnight (2002) found that the presence of non-executive directors on U.K.<br />

boards positively influenced the Tobin’s Q ratio of companies. Using a large panel dataset of U.K. firms for the<br />

period 1991-2001, Mura (2007) found significant positive association between the proportional representation of<br />

non-executives on the board and firm performance. Similarly, Choi, Park, and Yoo (2007) reported that outside<br />

directors in Korea have a significant and positive effect on firm performance.<br />

In contrast, Hermalin and Weisbach (1991), Mehran (1995), Klein (1998), Dalton et al. (1998), Vafeas and<br />

Theodorou (1998) and Laing and Weir (1999) all found no significant performance relationship. A possible<br />
explanation is that boards dominated by outsiders may lack the company-specific operational knowledge to<br />
effectively guide the company (Klein, 1998).<br />

On balance, it is expected that a majority outsider representation best enables efficient monitoring activities<br />

leading to a fall in agency costs. To investigate this relationship further, this study examines the relationship<br />

between board composition and firm performance as follows:<br />

Hypothesis 1:<br />

The proportion of non-executive directors serving on the board will be positively associated with improved<br />

monitoring of management and consequently, higher firm performance.<br />

CEO duality<br />

A question that has received growing attention in corporate governance literature is whether there is a relationship<br />

between board leadership structure and corporate performance. It is often speculated that the presence of a<br />

combined CEO/chair compromises the independence of the board as the individual has sufficient power to<br />

unreasonably influence company decision-making (Cadbury 1992; Jensen 1993) and thereby reduce the board’s<br />

ability to both monitor and discipline the management team. When the CEO is also the board chairman, a single<br />

person holding both roles is more likely to dominate the board, as it “signals the absence of separation of the<br />

decision management and the decision controls” (Fama and Jensen 1983, p. 314). This could render the board<br />

ineffective in discharging its leadership and control duties (Daily and Dalton 1993; Jensen 1993), with CEOs free<br />

“to pursue their own interests rather than the interests of shareholders” (Weisbach 1988, p. 435). A structure of this<br />

type enhances CEO power and authority and compromises board independence (Finkelstein and D'Aveni 1994;<br />

Rhoades, Rechner, and Sundaramurthy 2001), and eventually leads to the incapability of protecting the interest of<br />

shareholders by boards of directors. Therefore, CEO duality is expected to increase agency costs and affect firm<br />

performance negatively.<br />

There have been some dissenting views, with researchers finding that the combination of the positions (CEO<br />

and chairperson) enhanced company performance (Brickley, Coles, and Jarrell 1997; Coles, McWilliams, and Sen<br />

2001; Ying-Fen 2005).<br />

While no empirical consensus has been reached on the relationship between CEO duality and corporate performance,<br />
both theoretical arguments and regulatory frameworks provide strong support for the separation of these<br />

roles. Hence:<br />

Hypothesis 2:<br />

Separation of the roles of the Chief Executive Officer and the Chair of the Board will be positively related<br />

to firm performance.<br />

The existence of subcommittees<br />

Board subcommittees are formed and used as another agency conflict-controlling mechanism by firms to organise<br />

their boards in such a way that they can make most effective use of their directors, with much of the key decision-making<br />
and implementation occurring at committee level (Kesner 1988; Bilimoria and Piderit 1994; Daily 1994,<br />

1996; Ellstrand et al. 1999). It has been widely promulgated that boards of public companies should have separate<br />



monitoring committees for auditing the company financial statements, supervising the compensation of executive<br />

directors and controlling the selection process of new directors (Lipman 2007; The Sarbanes-Oxley Act of 2002).<br />

The audit committee is responsible for nominating the outside auditor, overseeing the preparation of the financial<br />

statements and annual reports, ensuring the efficacy of internal controls, and investigating allegations of material,<br />

financial, ethical and legal irregularities (Anderson and Anthony 1986, cited in Ellstrand et al. 1999). The<br />

remuneration committee is responsible for establishing the level of compensation for senior corporate executives<br />

and corporate officers. In addition, the remuneration committee is charged with recommending appropriate<br />

compensation for corporate directors (Fisher 1986, cited in Ellstrand et al. 1999). Finally, the nomination committee<br />

is charged with the identification, selection, and evaluation of qualified candidates to serve in key positions within<br />

the corporation. More specifically, this committee is responsible for the selection of the CEO, directors and other top<br />

corporate executives (Vance 1983, cited in Ellstrand et al. 1999).<br />

The relationship between board monitoring subcommittees and company performance (corporate value) is a<br />

research area of corporate governance that has not been extensively studied (Gales and Kesner 1994; Dalton et al.<br />

1998). Some early supportive empirical evidence is provided by Wild (1996) and Laing and Weir (1999) who<br />

reported that a positive effect on firm performance resulted after the establishment of audit committees. According<br />

to Main and Johnston (1993), Laing and Weir (1999) and Weir and Laing (2000), the presence of remuneration<br />

committees is similarly positively-associated with improved performance of companies. Finally, Klein (1998) found<br />

a positive (though weak) relationship between the presence of a remuneration committee and company performance.<br />

These committees perform key functions; their presence contributes to the board’s monitoring role and<br />

enhances the confidence of investors, not only in the reliability and fairness of company financial statements, but<br />

also in the effectiveness of the corporate reward system (executive remuneration packages) and the quality of<br />

appointed directors.<br />

As a result, companies with such monitoring subcommittees are expected to outperform those without them. Hence:<br />

Hypothesis 3a:<br />

The number of board subcommittees established by the main board will be positively related to firm<br />

performance.<br />

The independence of board monitoring committees<br />

A number of authors have argued that non-executive or independent directors on board subcommittees are more able<br />

to exercise independent judgment and will therefore be more effective in performing their monitoring role (Laing<br />

and Weir 1999; Stapledon and Lawrence 1997). Being independent from company executive positions, outside<br />

directors are free to assess and evaluate management actions and judgments objectively as well as to make crucial<br />

business decisions based upon moral grounds (Vance 1983; Ellstrand et al. 1999).<br />

In the case of audit committees, independence will ensure that the financial viability and integrity of the<br />

company are maintained and the interests of shareholders are being properly safeguarded. Cotter and Silvester<br />

(2003) argue that “independent audit committees, thereby, can reduce agency costs by minimizing the opportunistic<br />

selection of financial accounting policies, and by increasing the credibility and accuracy of financial reporting” (p.<br />

214). There is some empirical support for this position with Weir, Laing and McKnight (2002) reporting that U.K.<br />

firms with superior performance have a higher percentage of non-executives on the audit committee. Erickson, Park,<br />

Reising, and Shin (2005) found a reduction in the negative impact of ownership concentration on the value of<br />

Canadian firms when the proportion of outside directors on the audit committee increases.<br />

The call for independence may similarly be applied to remuneration committees. “It is clearly important that the<br />

remuneration committee should be able to arrive at its decisions independently so that suitable reward packages are<br />

drawn up which would motivate and retain executive directors while protecting shareholder interests” (Laing and<br />

Weir 1999, p. 458). As stated by Weir and Laing (2001, p. 88), “given that the aim of the remuneration committee is<br />

to supervise the performance of the executive directors and to devise suitable reward packages, its effectiveness is<br />

likely to be related to its structure and membership. It would therefore be expected that the remuneration committee<br />

would be made up entirely of non-executive directors”. Perhaps more bluntly put by Williamson (1985, p. 313), who<br />

commented that “the absence of an independent remuneration committee is akin to an executive’s writing their<br />

employment contract with one hand and then signing it with the other”.<br />



Finally, the independent structure of the nomination committee is also considered to be crucial. Ellstrand et al.<br />

(1999) asserted that “a nominating committee that is composed of independent directors may be more likely to<br />

appoint other independent directors who will be vigilant in monitoring the CEO”. Support is provided by Shivdasani<br />

and Yermack (1999), who found a significant negative market reaction, i.e. significantly lower<br />
cumulative abnormal stock returns, when the CEO is involved in the selection of company directors.<br />

In relation to monitoring committees (audit, remuneration and nomination), the independence of each<br />

committee member is expected to improve the ability to both monitor and discipline company management, in turn<br />

reducing agency costs and increasing company performance. Hence:<br />

Hypothesis 3b:<br />

The independence of the audit committee will be positively related to firm performance.<br />

Hypothesis 3c:<br />

The independence of the remuneration committee will be positively related to firm performance.<br />

Hypothesis 3d:<br />

The independence of the nomination committee will be positively related to firm performance.<br />

Auditor quality<br />

An additional governance instrument through which shareholders can monitor managerial behaviour and<br />

effectiveness is the independently-audited annual report to shareholders, which includes an auditor’s review of the<br />

reliability of the financial statements prepared by management (Watts and Zimmerman, 1983). Auditors not only<br />

provide shareholders with independently-ratified financial statements, but can also discover issues through internal<br />

control systems, including fraud detection. This improves the credibility of financial statements issued by the<br />

audited companies leading to lower contracting costs with external suppliers and lenders and therefore, a lower cost<br />

of capital (Jensen and Meckling, 1976).<br />

The quality of audit services may be defined as ‘the market-assessed joint probability that a given auditor will<br />

both (a) discover a breach in the client’s accounting system, and (b) report the breach’ (DeAngelo 1981, p. 186).<br />

From a shareholder’s perspective, the quality of the audit is not observable, and the shareholder is therefore able to<br />
make judgments based only on the reputation of the auditor and, possibly, price (where the fee is published). Given there<br />
is no direct measure of either audit quality or reputation, the majority of studies in this area have relied upon auditor<br />

size as a proxy measure for quality.<br />

Extant empirical research supports the positive relationship between auditor size and quality, indicating that the<br />

largest international audit firms provide above average quality of audit services (e.g. Palmrose 1988; Siew Hong and<br />

Wong 1993; Niemi 2004). Niemi (2004) in summarizing the literature in this area finds that large audit firms are<br />

associated with more accurate reports, lower litigation activity and greater compliance with GAAP reporting<br />

requirements. They are also seen to be able to better withstand pressure from clients, and due to their greater<br />

collateral, are seen to have more to lose in the case of an audit failure. This latter point was similarly revealed by<br />

DeAngelo (1981) who found that the probability of an auditor finding and reporting on a problem in the accounting<br />

system increased with audit firm size. Accordingly, auditor size as proxied by the participation of the ‘Big 4’ audit<br />

firms will be used as an indicator of higher-quality auditing, which is expected to lower agency and capital costs and<br />
ultimately lead to better firm performance. Hence:<br />

Hypothesis 4:<br />

Auditor size will be positively related to firm performance.<br />

Control Variables<br />

A number of additional variables are included to control for other potential influences on the performance of firms.<br />

Firm size<br />



The firm’s market capitalisation is included to control for the potential effects of firm size on corporate<br />

performance. Short and Keasey (1999) proposed two major avenues through which this effect may occur. Firstly, a<br />

financing effect, in which larger firms find it easier to generate funds internally and to access funds from external<br />

sources, lowering the overall cost of capital. Secondly, large firms may create higher entry barriers, thereby reducing<br />

competition and benefitting from above-normal profits.<br />

Debt ratio<br />

The debt ratio is defined as the book value of total debt divided by total assets – and this influences company<br />

performance in two ways. Firstly, the presence of debt ensures that management decisions and the firm’s operation<br />

are being externally monitored by debt holders. Stiglitz (1985) contends that lenders, particularly banks, effectively<br />

perform a function of management supervision. Secondly, the use of financial leverage creates contractual<br />

obligations for managers to meet fixed future debt repayments, thereby reducing the funds available to management<br />

for discretionary consumption of perks; moreover, debt requires management to become more efficient to reduce<br />

both the probability of bankruptcy and the potential loss of their own reputation (Jensen and Meckling 1976;<br />

Grossman and Hart 1982; Jensen 1986).<br />

Industry classification<br />

Industry effects account for the nature of the competitive environment in which a firm operates, including the<br />

number and size-dispersion of industry rivals and the rate of growth of the industry in general. Since performance<br />

may also depend on industry affiliations, a number of studies have included a dummy variable to capture these<br />

industry effects (Vafeas and Theodorou 1998; Ellstrand et al. 1999).<br />

Board size<br />

The disadvantages associated with large boards have been addressed by many authors. “When boards get beyond<br />

seven or eight people, they are less likely to function effectively and are easier for the CEO to control” (Jensen<br />

1993, p. 865). A board with “eight or fewer members engenders greater focus, participation, and genuine interaction<br />

and debate” (Firstenberg and Malkiel 1994, p. 34). According to Goodstein, Gautam, and Boeker (1994), strategic<br />

actions and changes are less likely to be initiated when there are a large number of board members. Yermack (1996),<br />

who first empirically documented a significant inverse relation between board size and firm performance,<br />

concluded that the costs associated with large boards (e.g. coordination, communication and director free-riding<br />

costs) are not sufficiently offset by their benefits. This study includes consideration of the number of directors on<br />

the board both as a control variable and also as an attempt to expand the literature linking board size with corporate<br />

performance.<br />

4 Sample Selection and Research Design<br />

To examine the relationship between governance and firm performance, 250 companies were randomly selected<br />

from a population of all companies listed on the Australian Stock Exchange in the 2005 financial year. The year<br />

2005 was chosen as it was a period of heightened interest in governance and governance structures. This followed<br />

from the collapse of companies such as Enron and WorldCom. Indeed, the two years prior (2003 and 2004) were<br />

characterized by the high level of activity in governance reviews and legislative changes. A study conducted in the<br />

year 2005 provided the first opportunity to assess governance and performance, incorporating mandatory changes in<br />

the areas of sub-committees, audit and composition. Finance-related companies including banking, insurance and<br />

trust companies were excluded from the sample, as their accounting reporting requirements and capital structure<br />

vary greatly from those of other companies and would distort the overall results. Missing data arises as a result of<br />

inadequate disclosure resulting in an inability to distinguish the role of CEO and board chair (Table 1).<br />

250 companies randomly selected from the population of all Australian companies listed on the ASX<br />
(finance-related companies, including banking, insurance and trust companies, excluded)                      250<br />
Less: identities of CEO and/or Chair of Board not disclosed, giving insufficient information to determine<br />
the company leadership structure (CEO duality)                                                               (11)<br />
Sample remaining after listwise deletion                                                                      239<br />
Table 1: Sample Size for the Governance–Performance Model<br />
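The selection procedure of Table 1 can be sketched in a few lines of pandas. This is an illustrative reconstruction only: the column names (`sector`, `ceo_name`, `chair_name`) are hypothetical, since the paper does not describe its underlying data files.

```python
import pandas as pd

def build_sample(listed: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Draw 250 firms from the non-financial ASX population, then
    listwise-delete firms whose CEO/chair identities are undisclosed."""
    # Exclude finance-related firms (banking, insurance, trusts),
    # whose reporting requirements and capital structures differ.
    non_financial = listed[~listed["sector"].isin(
        ["Banking", "Insurance", "Trusts"])]
    sample = non_financial.sample(n=250, random_state=seed)
    # CEO duality cannot be coded unless both identities are disclosed.
    return sample.dropna(subset=["ceo_name", "chair_name"])
```

In the paper, 11 of the 250 sampled firms fail the disclosure test, leaving the final sample of 239.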



In the corporate governance literature, the dependent variable firm performance has been measured as either:<br />

market-based (de Miguel 2004; Mura 2007); accounting-based (Dhnadirek and Tang 2003; Ng 2005); or both (Short<br />

and Keasey 1999; Bonn 2004; Guedri and Hollandts 2008). The majority of studies have followed the prescription<br />

of the Demsetz and Lehn (1985) study by using Tobin’s Q as the measure of firm performance. This is seen to have<br />

an advantage over accounting performance by incorporating a current perspective of the position of the firm (as<br />

determined by market price), rather than an historical perspective based on accounting results as measured by<br />

accounting conventions (Demsetz and Villalonga 2001). In accepting this approach, this study employs Tobin’s Q,<br />
which measures the degree to which the market values the firm above (or below) the book value of its assets and<br />
so provides an assessment of the efficiency with which management is utilising those assets. Table 2 shows the mean<br />

Tobin’s Q for the 2005 financial year (2.008), consistent with an earlier Australian survey of 114 listed companies in<br />

the financial year 1999–2000, which found a mean Tobin’s Q of 1.80 (Welch, 2003). Approximately 75% of<br />
Australian firms report a Tobin’s Q higher than 1.0 in 2005.<br />
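As a minimal sketch, the study’s simplified Tobin’s Q is just a ratio of market and balance-sheet items; the function name and signature below are illustrative.

```python
def tobins_q(mkt_equity: float, pref_shares: float,
             book_liabilities: float, book_assets: float) -> float:
    """Tobin's Q as defined in this study: (market value of equity
    + preferred shares + book value of total liabilities) divided by
    the book value of total assets."""
    return (mkt_equity + pref_shares + book_liabilities) / book_assets
```

A firm whose assets the market values exactly at book has Q = 1; values above 1 indicate the market prices the firm above the book value of its assets.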

Table 2: Mean, Median and Percentiles for Tobin’s Q for the Year 2005<br />
Country     Min.    Max.     Mean    Median   10th    25th    75th    90th<br />
Australia   0.000   10.430   2.008   1.460    0.790   1.018   2.393   3.845<br />
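The Table 2 statistics, together with the share of firms reporting Q above 1.0, would be computed from the sample’s vector of Q values along these lines; a numpy sketch (the dictionary layout is illustrative, not from the paper):

```python
import numpy as np

def q_summary(q: np.ndarray) -> dict:
    """Descriptive statistics in the layout of Table 2."""
    return {
        "min": float(q.min()),
        "max": float(q.max()),
        "mean": float(q.mean()),
        "median": float(np.median(q)),
        "p10": float(np.percentile(q, 10)),
        "p25": float(np.percentile(q, 25)),
        "p75": float(np.percentile(q, 75)),
        "p90": float(np.percentile(q, 90)),
        "share_above_one": float((q > 1.0).mean()),  # cf. ~75% in 2005
    }
```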

Tobin’s Q is regressed against seven variables of corporate governance mechanisms to measure their impact on firm performance. The study,<br />

therefore, specifically tests the following model:<br />

Tobin’s Q = a + β1ProNED + β2Duality + β3Subcommittees + β4AuditIndp + β5RemuIndp + β6NomiIndp + β7Auditor + γControl Variables<br />

Where:<br />

Tobin’s Q = The sum of the market value of equity and preferred shares and the book value of total liabilities, divided by the book value of<br />

total assets.<br />

ProNED = The proportion of non-executive directors on the board<br />

Duality = A dummy variable is set equal to 1 if the positions of CEO and COB are either held by<br />

the same person or two different persons with the same family name and zero otherwise<br />

Subcommittees = An index scored from 0–3, with 1 point scored for the disclosure of each of three board<br />

subcommittees: Audit, Remuneration and Nomination<br />

AuditIndp = The proportion of non-executive directors in the Audit committee<br />

RemuIndp = The proportion of non-executive directors in the Remuneration committee<br />

NomiIndp = The proportion of non-executive directors in the Nomination committee<br />

Auditor = A dummy variable is set equal to 1 if the firm’s audit company is one of the following<br />

large audit firms: KPMG, E&Y, PWC or DTT, and zero otherwise<br />

Control Variables:<br />

Firm Size = The natural log of market capitalisation.<br />

Debt Ratio = Total liabilities as a proportion of total assets<br />

Industry = 1 for companies in mining and resource sectors; and 0 otherwise<br />

Board Size = The number of directors on the board<br />
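Putting the specification above together, estimation reduces to an OLS regression of Tobin’s Q on the seven governance variables plus controls. A minimal sketch follows, with simulated data standing in for the study’s ASX sample; with real data one would typically use a statistics package to obtain the p-values reported in Table 3.

```python
import numpy as np

def fit_ols(y: np.ndarray, X: np.ndarray) -> tuple[np.ndarray, float]:
    """OLS of y on X (an intercept column is prepended); returns the
    coefficient vector and R-squared."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    tss = ((y - y.mean()) ** 2).sum()
    return beta, 1.0 - (resid ** 2).sum() / tss

# Simulated regressors in the order of the model above.
rng = np.random.default_rng(42)
n = 239
X = np.column_stack([
    rng.uniform(0, 1, n),    # ProNED: share of non-executives
    rng.integers(0, 2, n),   # Duality dummy (1 = combined CEO/chair)
    rng.integers(0, 4, n),   # Subcommittees index, 0-3
    rng.uniform(0, 1, n),    # AuditIndp
    rng.uniform(0, 1, n),    # RemuIndp
    rng.uniform(0, 1, n),    # NomiIndp
    rng.integers(0, 2, n),   # Auditor dummy (1 = Big 4)
    rng.normal(18, 2, n),    # Firm Size = ln(market cap)
    rng.uniform(0, 0.8, n),  # Debt Ratio
    rng.integers(0, 2, n),   # Industry dummy (mining/resources)
    rng.integers(4, 13, n),  # Board Size
])
# Placeholder dependent variable for the sketch only.
q = 0.9 * X[:, 0] - 0.45 * X[:, 1] + rng.normal(0, 1, n)
beta, r2 = fit_ols(q, X)
```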

5 Results<br />

Two models are presented to address multicollinearity detected between the subcommittee count and subcommittee<br />
independence variables.<br />
When both were included in the regression model, an unacceptably high variance inflation factor of 10.741 (equivalently,<br />
a tolerance below the conventional 0.10 cut-off) was detected for the subcommittee variable (Tabachnick and Fidell, 1996).<br />
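For reference, the standard collinearity diagnostic is the variance inflation factor (VIF), whose reciprocal is the tolerance; values of VIF above roughly 10 (tolerance below 0.10) are conventionally treated as problematic. A minimal numpy sketch, assuming a design matrix of regressors without the constant column:

```python
import numpy as np

def vif(X: np.ndarray, j: int) -> float:
    """Variance inflation factor of regressor j: 1 / (1 - R^2) from
    regressing column j on the remaining columns plus an intercept.
    Tolerance is simply 1 / VIF."""
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)
```

A pair of near-duplicate regressors produces a very large VIF, which is the pattern the two-model design above works around.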

Model 1 (Table 3) produces the coefficients on the variables ProNED and Duality with the expected signs, positive and<br />
negative respectively, consistent with agency theory. Contrary to expectations, the number of subcommittees<br />
(Subcommittees) and auditor size (Auditor) produce a negative relationship with firm performance. Results are<br />
similar when the board monitoring committees variable (Subcommittees) is removed, as shown in Model<br />

2. Among the control variables, the positive coefficients for firm size and debt variables are consistent with the<br />

arguments discussed earlier. A negative and statistically significant coefficient on board size suggests larger boards<br />

may not enhance company performance.<br />

Table 3: Regression analysis of Tobin’s Q on governance practices, firm size, debt ratio, industry and board size for listed Australian companies<br />

in 2005 (p-values in parentheses below coefficients)<br />

________________________________________________________________________<br />
Variable                     (1)                 (2)<br />
________________________________________________________________________<br />
Constant                  –1.110              –1.167<br />
                          (0.157)             (0.131)<br />
ProNED                     0.924*              0.987*<br />
                          (0.097)             (0.067)<br />
Duality                   –0.449*             –0.458*<br />
                          (0.095)             (0.088)<br />
Subcommittees             –0.135                 –<br />
                          (0.639)<br />
AuditIndp                 –0.369              –0.505<br />
                          (0.396)             (0.118)<br />
RemuIndp                  –0.176              –0.309<br />
                          (0.667)             (0.290)<br />
NomiIndp                  –0.324              –0.476<br />
                          (0.465)             (0.113)<br />
Auditor                   –0.101              –0.095<br />
                          (0.646)             (0.665)<br />
Firm Size                  0.231***            0.229***<br />
                          (0.000)             (0.000)<br />
Debt Ratio                 0.300               0.286<br />
                          (0.345)             (0.365)<br />
Industry                  –0.190              –0.183<br />
                          (0.383)             (0.399)<br />
Board Size                –0.165**            –0.159**<br />
                          (0.036)             (0.040)<br />
________________________________________________________________________<br />
R²                         0.135               0.135<br />
Adjusted R²                0.094               0.097<br />
F-Statistic                3.233***            3.546***<br />
Sample Size                239                 239<br />

Where: ***p < 0.01, **p < 0.05, and *p < 0.10<br />

Tobin’s Q = The sum of the market value of equity and preferred shares and the book value of total liabilities, divided by the book value of<br />

total assets.<br />

ProNED = The proportion of non-executive directors on the board<br />

Duality = A dummy variable is set equal to 1 if the positions of CEO and COB are either held by<br />

the same person or two different persons with the same family name and zero otherwise<br />

Subcommittees = An index scored from 0–3, with 1 point scored for the disclosure of each of three board<br />

subcommittees: Audit, Remuneration and Nomination<br />

AuditIndp = The proportion of non-executive directors in the Audit committee<br />

RemuIndp = The proportion of non-executive directors in the Remuneration committee<br />

NomiIndp = The proportion of non-executive directors in the Nomination committee<br />

Auditor = A dummy variable is set equal to 1 if the firm’s audit company is one of the following<br />

large audit firms: KPMG, E&Y, PWC or DTT, and zero otherwise<br />

Firm Size = The natural log of market capitalization.<br />

Debt Ratio = Total liabilities as a proportion of total assets<br />

Industry = 1 for companies in mining and resource sectors; and 0 otherwise<br />

Board Size = The number of directors on the board<br />

Overall, the findings suggest that an increase in the proportion of non-executive directors on boards will result in an<br />
increase in corporate value through the effective performance of monitoring duties by independent directors.<br />

Additionally, firms that employ the unitary leadership structure are shown to underperform those firms that assign<br />

two unrelated persons to the posts of CEO and chair. The roles of subcommittees, irrespective of their independence,<br />

and auditor quality did not produce significant results for these data.<br />

The results for the control variables of firm size, financial leverage and board size were also found to be consistent<br />

in direction across both models. Firm size and debt ratio are positively associated with performance,<br />

although the latter is not significant for this data, while board size produces a significant negative relationship.<br />

6 Conclusion<br />

The results on the influence of board monitoring on firm performance were inconclusive. The data provide<br />

no consistent evidence that regulatory pronouncements or shareholder agitation regarding establishment and<br />

independence of board subcommittees are effective in enhancing firm performance. Nevertheless, two exceptions<br />

were noted with respect to the NED representation and dual leadership structure which appear to be influential in<br />

improving firm performance.<br />



The proportion of non-executive directors was found to positively affect firm performance (H1) in line with the<br />

evidence on U.S. firms provided by Rosenstein and Wyatt (1990), on U.K. firms provided by Mura (2007), and on<br />

large Australian firms provided by Bonn (2004) and Bonn, Yoshikawa, and Phan (2004). This supports the<br />

contention that non-executives on the board enhance firm performance as they are able to independently scrutinise<br />

management action.<br />

The leadership structure was found to be in the predicted direction (H2), where firms with a non-executive chair<br />

outperform firms that assign a CEO as board chairman. The results found in this study are in line with others (e.g.<br />

Rechner and Dalton 1991; Pi and Timme 1993; Daily and Dalton 1994; Rhoades et al. 2001), who suggest that<br />

dividing these two positions enhances corporate performance.<br />

The remaining governance-related hypotheses were not supported in this study. The subcommittee variable<br />

(H3a) was not significant and contrary to expectation, was negatively-correlated to performance. Board committee<br />

composition (H3b, c, d) was similarly not statistically significant and negative. However, this result mirrors the<br />

insignificant relationships generally discovered in previous studies into the influence of subcommittee independence<br />

on firm performance (Klein 1998; Vafeas and Theodorou 1998; Ellstrand et al. 1999; Cotter and Silvester 2003; Hsu<br />

2008). Audit quality as measured by the auditor size was not found to have any influence on firm performance.<br />

The control variables of firm size and debt ratio are positively associated with performance, and board size<br />
produces a significant negative relationship. The board size finding is particularly important in confirming the results<br />
of a number of international studies and lends support to the contention that large boards are inefficient.<br />

Overall, the results presented suggest that regulators’ concentration on board subcommittees and their<br />

composition would seem to be ineffectual in aligning the interests of managers and shareholders. No support could<br />

be found for the contention that an increase in the number of subcommittees and/or NED representation on those<br />

subcommittees would improve firm performance. This is consistent with a growing number of studies that have<br />

failed to find a systematic relationship between the structure and composition of board subcommittees and corporate<br />

financial performance (Klein 1998; Vafeas and Theodorou 1998; Ellstrand et al. 1999; Cotter and Silvester 2003;<br />

Hsu 2008). If regulators continue to pursue an agenda of compulsorily requiring the establishment of<br />

subcommittees, or insisting on a higher proportion of non-executive directors on those subcommittees, it may be<br />

counterproductive to business generally. Further research is required to identify the causes of this disparity between<br />

theory and practical outcomes, before more costly regulation is enacted.<br />

The study does find support for NED representation on the board of directors and a dual leadership structure in<br />

improving firm performance. It is notable that 13% of the Australian companies did not achieve majority non-executive<br />

director representation and 19% of Australian companies did not utilise a dual leadership structure. The<br />

study provides an argument that these companies should consider increasing the proportion of non-executive<br />

directors on the board and assigning two unrelated persons to the posts of CEO and chair of the board.<br />

This research could be extended in a number of ways. A replication of the study employing longitudinal<br />

data would assist in determining whether the relationships found were robust under differing economic conditions.<br />

Tobin’s Q was employed as the dependent variable in this analysis, to provide an assessment of the efficiency with<br />

which management is utilising assets. Future research could assess the impact of the governance mechanisms on<br />

other performance-related variables, including accounting-based measures (ROA, ROE and ROI) – which provide<br />

an historical view of performance based on accounting conventions. The possibility of using more refined measures<br />

to assess the characteristics of the board and its committee members including their experience and expertise—<br />

would enhance our understanding of the role of the committees and their ultimate impact on performance. This may<br />

include qualitative aspects which investigate the management process and directly interrogate the participants (for<br />

example, how influential are NEDs in the decision making process), rather than relying on reported board structures.<br />

7 References<br />

Bilimoria, D., and S. K. Piderit. 1994. Board committee membership: Effects of sex-based bias. Academy of<br />

Management Journal 37 (6): 1453–1477.<br />

Bonn, I. 2004. Board structure and firm performance: Evidence from Australia. Journal<br />



of the Australian and New Zealand Academy of Management 10 (1): 14–24.<br />

Bosch, H. 1995. The Director at Risk: Accountability in the Boardroom. Melbourne: FT Pitman Publishing.<br />

Boyd, B. K. 1995. CEO duality and firm performance: A contingency model. Strategic Management Journal 16 (4):<br />

301–312.<br />

Brickley, J. A., J. L. Coles, and G. Jarrell. 1997. Leadership structure: Separating the CEO and chairman of the<br />

board. Journal of Corporate Finance 3 (3): 189–220.<br />

Cadbury, A. 1992. Committee on the Financial Aspects of Corporate Governance. London: GEE.<br />

Cadbury Committee Report 1992. The financial aspects of corporate governance.<br />

Choi, J. J., S. W. Park, and S. S. Yoo. 2007. The value of outside directors: Evidence from corporate governance<br />

reform in Korea. Journal of Financial & Quantitative Analysis 42 (4): 941–962.<br />

Coles, J. W., V. B. McWilliams, and N. Sen. 2001. An examination of the relationship of governance mechanisms to<br />

performance. Journal of Management 27 (1): 23.<br />

Cotter, J., and M. Silvester. 2003. Board and monitoring committee independence. Abacus 39 (2): 211–232.<br />

Daily, C. M. 1994. Bankruptcy in strategic studies: Past and promise. Journal of Management 20 (2): 263.<br />

Daily, C. M. 1996. Governance patterns in bankruptcy reorganizations. Strategic Management Journal 17 (5): 355–<br />

375.<br />

Daily, C. M., and D. R. Dalton. 1993. Board of directors leadership and structure: Control and performance<br />

implications. Entrepreneurship: Theory & Practice 17 (3): 65–81.<br />

Dalton, D. R., C. M. Daily, A. E. Ellstrand, and J. L. Johnson. 1998. Meta-analytic reviews of board composition,<br />

leadership structure, and financial performance. Strategic Management Journal 19 (3): 269.<br />

de Miguel, A. 2004. Ownership structure and firm value: New evidence from Spain. Strategic Management<br />

Journal 25 (12): 1199–1207.<br />

DeAngelo, L. E. 1981. Auditor size and audit quality. Journal of Accounting and Economics 3 (3): 183–199.<br />

Demsetz, H., and K. Lehn. 1985. The structure of corporate ownership: Causes and consequences. Journal of<br />

Political Economy 93 (6): 1155–1177.<br />

Demsetz, H., and B. Villalonga. 2001. Ownership structure and corporate performance. Journal of Corporate<br />

Finance 7 (3): 209–233.<br />

Dhnadirek, R., and J. Tang. 2003. Corporate governance problems in Thailand: Is ownership concentration the<br />

cause? Asia Pacific Business Review 10 (2): 121–138.<br />

Ellstrand, A. E., C. M. Daily, J. L. Johnson, and D. R. Dalton. 1999. Governance by committee: The influence of<br />

board of directors' committee composition on corporate performance. Journal of Business Strategies 16 (1):<br />

67–88.<br />

Elsayed, K. 2007. Does CEO duality really affect corporate performance? Corporate Governance: An International<br />

Review 15 (6): 1203–1214.<br />

Erickson, J., Y. W. Park, J. Reising, and H.-H. Shin. 2005. Board composition and firm value under concentrated<br />

ownership: the Canadian evidence. Pacific-Basin Finance Journal 13 (4): 387–410.<br />

Fama, E. F. 1980. Agency problems and the theory of the firm. Journal of Political Economy 88 (2): 288–307.<br />

Fama, E. F., and M. C. Jensen. 1983. Separation of ownership and control. Journal of Law & Economics 26 (2):<br />

301–326.<br />

Fich, E. M. 2005. Are some outside directors better than others? Evidence from director appointments by Fortune<br />

1000 firms. Journal of Business 78 (5): 1943–1971.<br />

Finkelstein, S., and R. A. D'Aveni. 1994. CEO duality as a double-edged sword: How boards of directors balance<br />

entrenchment avoidance and unity of command. Academy of Management Journal 37 (5): 1079–1108.<br />

Firstenberg, P. B., and B. G. Malkiel. 1994. The Twenty-first Century boardroom: Who will be in charge? Sloan<br />

Management Review 36 (1): 27–35.<br />

Gales, L. M., and I. F. Kesner. 1994. An analysis of board of director size and composition in bankrupt<br />

organizations. Journal of Business Research 30 (3): 271–282.<br />

Goodstein, J., K. Gautam, and W. Boeker. 1994. The effects of board size and diversity on strategic change.<br />

Strategic Management Journal 15 (3): 241–250.<br />

Grossman, S., and O. Hart. 1982. Corporate financial structure and managerial incentives.<br />

In The Economics of Information and Uncertainty, ed. J. McCall, 107–137. Chicago: University of Chicago Press.<br />

Guedri, Z., and X. Hollandts. 2008. Beyond dichotomy: The curvilinear impact of employee ownership on firm<br />

performance. Corporate Governance: An International Review 16 (5): 460–474.<br />

Hermalin, B. E., and M. S. Weisbach. 1991. The effects of board composition and direct incentives on firm<br />

performance. FM: The Journal of the Financial Management Association 20 (4): 101–112.<br />



Holmstrom, B., and S. N. Kaplan. 2003. The state of U.S. corporate governance: What’s right and what’s wrong?<br />

Journal of Applied Corporate Finance 15 (3): 8–20.<br />

Jensen, M. C. 1993. The modern industrial revolution, exit, and the failure of internal control systems. Journal of<br />

Finance 48 (3): 831–880.<br />

Jensen, M. C., and W. H. Meckling. 1976. Theory of the firm: Managerial behavior, agency costs and ownership<br />

structure. Journal of Financial Economics 3 (4): 305–360.<br />

Kesner, I. F. 1988. Directors' characteristics and committee membership: An investigation of type, occupation,<br />

tenure, and gender. Academy of Management Journal 31 (1): 66–84.<br />

Klein, A. 1998. Firm performance and board committee structure. Journal of Law and Economics 41 (1): 275–303.<br />

Laing, D., and C. M. Weir. 1999. Governance structures, size and corporate performance in U.K. firms.<br />

Management Decision 37 (5/6): 457.<br />

Lipman, F. D. 2007. Summary of major corporate governance principles and best practices. International Journal of<br />

Disclosure and Governance 4 (4): 309–319.<br />

Main, B. G. M., and J. Johnston. 1993. Remuneration committees and corporate governance. Accounting & Business<br />

Research 23 (91A): 351–362.<br />

Mallin, C. A. 2004. Corporate governance. Oxford: Oxford University Press.<br />

Mehran, H. 1995. Executive compensation structure, ownership, and firm performance. Journal of Financial<br />

Economics 38 (2): 163–184.<br />

Menon, K., and J. D. Williams. 1994. The use of audit committees for monitoring. Journal of Accounting & Public<br />

Policy 13 (2): 121–139.<br />

Mura, R. 2007. Firm Performance: Do non-executive directors have minds of their own? Evidence from U.K. panel<br />

data. Financial Management (Blackwell Publishing Limited) 36 (3): 81–112.<br />

NACD. 1996. A practical guide for corporate directors. Washington: National Association of Corporate Directors.<br />

Ng, C. Y. M. 2005. An empirical study on the relationship between ownership and performance in a family-based<br />

corporate environment. Journal of Accounting, Auditing & Finance 20 (2): 121–146.<br />

Niemi, L. 2004. Auditor size and audit pricing: Evidence from small audit firms. European Accounting Review 13<br />

(3): 541–560.<br />

Palmrose, Z.-V. 1988. An analysis of auditor litigation and audit service quality. Accounting Review 63 (1): 55.<br />

Pi, L., and S. G. Timme. 1993. Corporate control and bank efficiency. Journal of Banking & Finance 17 (2-3): 515–<br />

530.<br />

Rechner, P. K., and D. R. Dalton. 1991. CEO duality and organizational performance: A longitudinal analysis.<br />

Strategic Management Journal 12 (2): 155–160.<br />

Rhoades, D. L., P. L. Rechner, and C. Sundaramurthy. 2001. A meta-analysis of board leadership structure and<br />

financial performance: “Are two heads better than one”? Corporate Governance: An International Review 9<br />

(4): 311–319.<br />

Rosenstein, S., and J. G. Wyatt. 1990. Outside directors, board independence, and shareholder wealth. Journal of<br />

Financial Economics 26 (2): 175–191.<br />

Rosenstein, S., and J. G. Wyatt. 1997. Inside directors, board effectiveness, and shareholder wealth. Journal of<br />

Financial Economics 44 (2): 229–250.<br />

Sarbanes-Oxley Act 2002. The Public Company Accounting Reform and Investor<br />

Protection Act (H.R. 3763). http://www.whitehouse.gov/infocus/corporateresponsibility.<br />

Schmid, M. M., and H. Zimmermann. 2008. Should chairman and CEO be separated? Leadership structure and firm<br />

performance in Switzerland. Schmalenbach Business Review : ZFBF 60: 182–204.<br />

Shivdasani, A., and D. Yermack. 1999. CEO involvement in the selection of new board members: An empirical<br />

analysis. Journal of Finance 54 (5): 1829–1853.<br />

Short, H., and K. Keasey. 1999. Managerial ownership and the performance of firms: Evidence from the U.K.<br />

Journal of Corporate Finance 5 (1): 79–101.<br />

Teoh, S. H., and T. J. Wong. 1993. Perceived auditor quality and the earnings response coefficient. Accounting<br />

Review 68 (2): 346–366.<br />

Stapledon, G. P., and J. Lawrence. 1997. Board composition, structure and independence in Australia's largest listed<br />

companies. Melbourne University Law Review 21 (1).<br />

Stiglitz, J. E. 1985. Credit markets and the control of capital. Journal of Money, Credit & Banking 17 (2): 133–152.<br />

Tabachnick, B. G., and L. S. Fidell. 1996. Using multivariate statistics (third edition). New York: Harper Collins.<br />

Vafeas, N., and E. Theodorou. 1998. The relationship between board structure and firm performance in the U.K.<br />

The British Accounting Review 30 (4): 383–407.<br />

Vance, S. C. 1983. Corporate leadership, boards, directors and strategy. New York: McGraw-Hill.<br />



Watts, R. L., and J. L. Zimmerman. 1983. Agency problems, auditing, and the theory of the firm: Some evidence.<br />

Journal of Law & Economics 26 (3): 613–633.<br />

Weir, C., D. Laing, and P. J. McKnight. 2002. Internal and external governance mechanisms: Their impact on the<br />

performance of large U.K. public companies. Journal of Business Finance & Accounting 29 (5/6): 579–611.<br />

Weir, C. M., and D. Laing. 2000. The performance-governance relationship: The effects of Cadbury compliance on<br />

U.K. quoted companies. Journal of Management and Governance 4 (4): 265–281.<br />

Weir, C. M., and D. Laing. 2001. Governance structures, director independence and corporate performance in the<br />

U.K. European Business Review 13 (2): 86–93.<br />

Weisbach, M. S. 1988. Outside directors and CEO turnover. Journal of Financial Economics 20: 431–460.<br />

Welch, E. 2003. The relationship between ownership structure and performance in listed Australian companies.<br />

Australian Journal of Management 28 (3): 287–305.<br />

Wild, J. J. 1996. The audit committee and earnings quality. Journal of Accounting, Auditing & Finance 11 (2): 247–<br />

276.<br />

Williamson, O. E. 1985. The economic institutions of capitalism: Firms, markets, relational contracting. New York:<br />

Free Press.<br />

Yermack, D. 1996. Higher market valuation of companies with a small board of directors. Journal of Financial<br />

Economics 40 (2): 185–211.<br />

Ying-Fen, L. 2005. Corporate governance, leadership structure and CEO compensation: Evidence from Taiwan.<br />

Corporate Governance: An International Review 13 (6): 824–835.<br />



COST-BENEFIT ANALYSIS OF THE FINANCIAL STATEMENTS CONVERSION: A CASE STUDY<br />

FROM THE CZECH REPUBLIC<br />

David Procházka, University of Economics, Prague, Czech Republic<br />

Email: prochazd@vse.cz, https://webhosting.vse.cz/prochazd/<br />

Abstract. EU Regulation No. 1606/2002 obliges companies listed on the EU stock exchanges to prepare their consolidated financial<br />

statements in compliance with IFRS. Many Czech companies are under the control of foreign companies (issuers of securities listed on<br />

EU capital markets). As the IFRS statements are not relevant for tax purposes, Czech companies prepare financial statements<br />

according to Czech legislation for statutory purposes. However, for consolidation purposes, they have to provide their parent<br />

companies with financial statements prepared in compliance with IFRS as adopted by the EU.<br />

The paper deals with three basic solutions, which can be applied when an entity is engaged in the conversion of financial statements<br />

from one set of accounting standards to another set of standards. The first method uses conversion on the financial statements level.<br />

The second method applies the conversion on the trial balance level. Finally, some companies prefer to implement specialised<br />

accounting software, which enables all accounting transactions to be recorded twice, according to both CAS and IFRS. Each method is<br />

briefly described and its main costs and benefits are analysed. The theoretical findings are supported by empirical evidence based on a<br />

questionnaire scrutinising the practical experience of selected Czech companies involved in the process of financial statements<br />

conversion. Moreover, a short description of expected future development is outlined.<br />

Keywords: Conversion of Financial Statements; IFRS; Czech Accounting Standards; Dual (Financial) Accounting System<br />

JEL classification: M41<br />

1 Introduction<br />

The adoption of the International Financial Reporting Standards has caused a radical change in financial reporting,<br />

especially in countries with a code-law tradition of accounting regulation. The increased usefulness of financial statements<br />

prepared in accordance with the IFRS is evidenced by recent research worldwide. However, the implementation of<br />

the IFRS into national legislation imposes costs on many parties involved in the process. For example, many national<br />

legal frameworks require entities to prepare IFRS statements for European stock exchanges and simultaneously to<br />

prepare the financial statements based on national accounting standards for statutory and/or tax purposes. As a<br />

consequence, entities have to maintain two different sets of accounting data. The conversion of financial statements<br />

from one set to another is a complex and costly process. The paper’s main aim is to evaluate recent developments in<br />

this field in the Czech Republic.<br />

2 Background<br />

2.1 Literature overview<br />

The IFRS adoption has increased the quality of disclosed information compared to national GAAP (Barth et al.,<br />

2008). The improvement in quality is significant across Europe (Aubert and Grudnitski, 2009) and especially in<br />

code-law countries (Morais and Curto, 2007), where accounting is closely linked to taxation systems. The increase in<br />

value relevance is demonstrated in countries with a significant level of discretion in financial reporting, such as Italy<br />

(Cordazzo, 2008) or Spain (Ferrer et al., 2009). The positive influence of the IFRS adoption is also evidenced in<br />

transitional countries, e.g. in Romania (Mustata et al., 2009), Poland (Jaruga et al., 2007), Russia (Bagaeva, 2009).<br />

Evidence confirming the value relevance of the IFRS is also available for countries that traditionally focus on<br />

supplying high-quality information to external users, such as the United Kingdom (Christensen et al., 2009).<br />

As the previous summary of literature shows, the benefits of IFRS adoption are significant and can be traced in<br />

many areas of the economy. On the other hand, non-negligible costs are induced by the harmonisation process. Entities<br />

are obliged to prepare financial statements, which are usually more complex than under local GAAP; investors have<br />

to recustomise models computing intrinsic values of corporate securities, creditors need to adjust their risk<br />

assessment models, and state authorities must solve the problem of how to ensure tax compliance under the new<br />

accounting system, etc. Not all costs of the IFRS implementation are borne by those who take advantage of more<br />



useful and comparable financial statements. Therefore, a comprehensive cost-benefit analysis of the IFRS<br />

implementation is difficult because it is unfeasible to compare benefits of one group involved with costs of another<br />

group.<br />

2.2 Aims of the paper<br />

The paper focuses on firms preparing financial statements in compliance with the IFRS. More specifically,<br />

advantages, disadvantages and costs of various methods used for the preparing of IFRS financial statements will be<br />

scrutinised. The problem seems to be pretty trivial at the first sight. The method used and cost incurred should be<br />

very similar as under any else reporting system. However, we should have on mind that financial reporting is a<br />

subject of state regulation. Besides IFRS statements for stock exchanges, entities are also bound to prepare financial<br />

statements according to national accounting standards for statutory and/or tax purposes in many countries (e.g. the<br />

EU member states). As a consequence, entities face to a problem how to ensure the preparation of two sets of<br />

financial statements, usually substantially different, in order to meet all legal requirements with minimal costs. This<br />

issue will be referred as the conversion of financial statements further in the text.<br />

Advantages and disadvantages of various approaches to the conversion of financial statements will be<br />

evaluated. Further, a short description of regulatory framework for financial reporting within the European Union<br />

together with the analysis of accounting regulation in the Czech Republic will be performed. Finally, empirical<br />

evidence will be presented to support some theoretical findings.<br />

The Czech Republic has been chosen as the “case country” for two reasons. First, the conversion of financial<br />

statements is an accounting issue of great importance there, because about 40% of Czech companies prepare two sets of<br />

financial statements. Second, the accounting profession has been striving to persuade the regulator of accounting in<br />

the Czech Republic (i.e. the Ministry of Finance) to undertake certain measures and thus improve a rather<br />

unsatisfactory situation. After many years, the Ministry of Finance responded to the efforts of the accounting profession<br />

and amended the Act on Accounting by enabling selected entities to apply the IFRS on a voluntary basis. The<br />

development in the Czech Republic can serve as an inspiration for the regulators of financial reporting in other<br />

countries, in both a positive and a negative sense. We can assume that countries whose accounting regulation is based<br />

on the code-law approach and whose accounting is subordinated to tax requirements face a similar problem.<br />

3 Conversion of financial statements<br />

The conversion of financial statements can be defined as a process in which an entity prepares two or more sets of<br />

financial statements for external users, each in compliance with distinct financial reporting standards. Financial<br />

statements different from statutory accounts for bank credit purposes; financial statements based on national<br />

legislation for tax purposes by listed companies; statements according to foreign GAAP for the purpose of<br />

consolidation by parent company domiciled abroad or financial statements in accordance with generally accepted<br />

principles instead of local GAAP for stock exchanges are common examples of this conversion.<br />

There are three dimensions of financial statements conversion:<br />

• preparation of the first individual financial statements (or the opening balance sheet) in compliance with the<br />

alternative set of financial reporting standards;<br />

• reporting of individual financial statements and other figures needed for consolidation or other purposes at<br />

regular intervals;<br />

• consolidation of individual financial statements.<br />

The number of particular steps in each mentioned phase may differ depending on the purpose of conversion.<br />

Whether a company performs the conversion on its own or uses a template (e.g. one prepared by the parent company) is<br />

another factor influencing the process of conversion.<br />

The choice of a proper method of financial statements conversion is especially relevant in the case of frequent<br />

regular (e.g. monthly) reporting. No general advice exists on which solution to adopt. Each entity should take into<br />

account its specific conditions and choose the approach that balances benefits and costs in the most favourable manner.<br />

The appropriate conversion method depends on many factors; at a minimum, labour and ICT costs, the number and<br />

type of differences, and the frequency of and deadlines for reporting should be taken into consideration. There<br />

are three basic approaches to transforming financial statements from one set of accounting standards to another<br />

(Mejzlík, 2006):<br />



• conversion on the financial statements level;<br />

• conversion on the trial balance level;<br />

• dual accounting system.<br />

3.1 Conversion on financial statements level<br />

This method uses only the reclassification of items presented in financial statements. The main advantages of the<br />

method are:<br />

• easy and quick to implement;<br />

• no specialised ICT is needed;<br />

• low cost and labour burden;<br />

• easy to check the correctness of adjustments.<br />

The disadvantages are:<br />

• applicable only if the number of differences is low (no measurement, recognition, or accounting policy<br />

issues);<br />

• workable only in cases where classification is the only difference.<br />
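The statements-level reclassification can be pictured in a few lines of code. The following Python snippet is an illustrative sketch only; the account names and the CAS-to-IFRS mapping are assumptions for the example, not prescriptions of either standard:<br />

```python
# Statements-level conversion: CAS line items are merely regrouped under
# IFRS presentation captions; the amounts themselves are left unchanged.
# The mapping and account names below are illustrative assumptions.
CAS_TO_IFRS = {
    "Software": "Intangible assets",
    "Buildings": "Property, plant and equipment",
    "Goods": "Inventories",
}

def reclassify(cas_balances):
    """Regroup CAS balance-sheet items under IFRS captions."""
    ifrs = {}
    for item, amount in cas_balances.items():
        caption = CAS_TO_IFRS.get(item, item)  # unmapped items keep their name
        ifrs[caption] = ifrs.get(caption, 0.0) + amount
    return ifrs

ifrs_statement = reclassify({"Software": 120.0, "Buildings": 800.0, "Goods": 150.0})
```

Because only labels change, the balance-sheet total is preserved, which also shows why the method fails as soon as measurement or recognition differences appear.<br />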

3.2 Conversion on trial balance level<br />

In this case, the list of accounts (trial balance) based on CAS rules is exported from the accounting software and<br />

adjustments are then made in spreadsheets (e.g. Excel). The following pros can be identified:<br />

• no specialised ICT is needed (spreadsheets are sufficient);<br />

• applicable even if the number of differences is higher.<br />

The cons of this method are:<br />

• applicable only for differences in the recognition of items (provisions, assets recognised under IFRS 3);<br />

not workable for measurement differences (work in progress) or accounting policy differences (depreciation);<br />

• an additional IFRS accounting expert must be employed, raising salary expenses;<br />

• the conversion system is designed by that expert, so his/her substitution in case of illness or termination is<br />

questionable and sometimes even impossible;<br />

• testing the correctness and conclusiveness of the “conversion bridge” is complicated and causes additional<br />

problems, especially for auditors;<br />

• the consistency of adjustments across periods and data relationships (e.g. retained earnings) is hard to maintain;<br />

• archiving of underlying documents outside the accounting system is an open issue.<br />

Moreover, this method is ambiguous as far as meeting reporting deadlines is concerned. No additional<br />

transactions need to be recorded in the accounting system; however, after all transactions are recorded, the whole<br />

conversion process still has to be carried out.<br />

Timely and error-free data recording is a limiting factor of this method. Most delays arise because<br />

deadlines for recording transactions are not met; e.g. missing transactions (due to delayed underlying<br />

documentation) are recorded later and the whole conversion must be run once again. Predefined tables,<br />

macros and other automatic calculations may mitigate the negative consequences of such delays, although not in all<br />

cases.<br />
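The spreadsheet-based “conversion bridge” amounts to applying adjustment journal entries on top of the exported CAS trial balance. The sketch below is a hypothetical illustration; the accounts, the sign convention (debits positive, credits negative) and the single provision adjustment are assumptions for the example:<br />

```python
# Trial-balance-level conversion: the CAS trial balance is exported and
# IFRS-only adjustments are applied as extra journal entries.
# Accounts, figures and the debit-positive/credit-negative convention
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Entry:
    debit: str
    credit: str
    amount: float

def apply_adjustments(trial_balance, entries):
    """Return a new IFRS trial balance; the exported CAS one stays intact."""
    tb = dict(trial_balance)
    for e in entries:
        tb[e.debit] = tb.get(e.debit, 0.0) + e.amount
        tb[e.credit] = tb.get(e.credit, 0.0) - e.amount
    return tb

cas_tb = {"Expenses": 500.0, "Liabilities": -300.0}
# e.g. a provision recognised under IFRS but not under CAS
ifrs_tb = apply_adjustments(cas_tb, [Entry("Expenses", "Provisions", 40.0)])
```

Since every adjustment is a balanced double entry, the net effect on the trial-balance total is nil; a late CAS transaction, however, still forces the whole set of adjustments to be re-run, which is exactly the delay risk noted above.<br />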

3.3 Dual accounting system<br />

The accounting system is set up so that it enables entities to record all transactions twice, reflecting the different<br />

requirements of both accounting systems. This method has the following advantages:<br />

• all types of differences can be covered;<br />

• conclusiveness and objectivity are secured, as all “adjustments” are made directly in the software modules;<br />

• customisation of the accounting software is possible and more outputs for management are available;<br />

• suitable when the number of differences is very high;<br />

• possible integration with consolidation reporting systems and other ICT systems.<br />

This method has the following disadvantages:<br />

• implementation of new software or an upgrade of the current software is needed, entailing additional costs and<br />

changes in processes;<br />

• more transactions are recorded (more workload and additional employees, hence higher labour costs);<br />

• the question is how to record the transactions (record all transactions twice in each module, or create a special<br />

module for differing transactions only);<br />



• the way of archiving documents is not clear-cut (should documents be numbered, ordered and stored<br />

according to local GAAP, to IFRS, or in a combined manner?).<br />

With respect to reporting deadlines, this method of conversion brings mixed results.<br />

More transactions need to be recorded in the accounting software. However, once everything is recorded, no additional adjustments<br />

are needed, as both sets of financial statements can be exported directly from the software. Moreover,<br />

recording the additional transactions does not cause any serious delays, because updated IFRS statements can be<br />

retrieved from the accounting software immediately.<br />
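A dual accounting system can be pictured as two parallel ledgers fed by the same transaction stream, with standard-specific postings only where the treatments diverge. The minimal sketch below is an illustration under stated assumptions (the class, accounts and depreciation figures are invented for this example):<br />

```python
# Dual accounting system: every transaction is posted to both a CAS and
# an IFRS ledger; only transactions with differing treatments (here,
# depreciation) are posted separately per basis. All figures are
# illustrative assumptions.
class DualLedger:
    def __init__(self):
        self.ledgers = {"CAS": {}, "IFRS": {}}

    def post(self, basis, account, amount):
        """Post one amount to one account in the ledger of the given basis."""
        book = self.ledgers[basis]
        book[account] = book.get(account, 0.0) + amount

    def post_both(self, account, amount):
        """Post a transaction treated identically under both standards."""
        for basis in self.ledgers:
            self.post(basis, account, amount)

books = DualLedger()
books.post_both("Machine", 1000.0)        # identical under both standards
books.post_both("Cash", -1000.0)
books.post("CAS", "Depreciation", 200.0)  # CAS: tax-driven rate assumed
books.post("IFRS", "Depreciation", 125.0) # IFRS: useful-life estimate assumed
```

Since both ledgers are always up to date, either set of statements can be exported at any time, which is why this approach copes best with tight reporting deadlines.<br />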

4 Implementation of the IFRS in the Czech Republic<br />

4.1 Regulation of financial reporting in the Czech Republic<br />

As the Czech Republic is a member state of the EU, its regulation of accounting must conform to EU<br />

legislation. The chief sources of EU guidance are the following documents:<br />

• Fourth Council Directive 78/660/EEC of 25 July 1978 based on Article 54 (3) (g) of the Treaty on annual<br />

accounts of certain types of companies (incl. subsequent amendments);<br />

• Seventh Council Directive 83/349/EEC of 13 June 1983 based on Article 54 (3) (g) of the Treaty on<br />

consolidated accounts;<br />

• Regulation (EC) 1606/2002 of the European Parliament and of the Council of 19 July 2002 on the<br />

application of international accounting standards.<br />

The Directives were incorporated directly into the Act on accounting as early as the 1990s.<br />

“For each financial year starting on or after 1 January 2005, publicly traded companies governed by the law of<br />

a Member State shall prepare their consolidated accounts in conformity with the international accounting<br />

standards”, in accordance with Article 4 of Regulation (EC) 1606/2002.<br />

The Regulation on international accounting standards imposes a duty on publicly traded companies to prepare<br />

consolidated financial statements. Member states may broaden the scope of entities that are obliged or allowed to apply<br />

the IFRS (e.g. in the individual financial statements of listed companies or in the individual/consolidated financial statements<br />

of non-listed companies).<br />

The main means of accounting regulation is code law. Czech accounting and financial reporting are<br />

regulated by:<br />

• Act No. 513/1991 Coll., the Commercial Code;<br />

• Act No. 563/1991 Coll., on accounting;<br />

• Decree of the Ministry of Finance No. 500/2002 Coll., implementing some provisions of Act No. 563/1991 Coll.,<br />

on accounting, for the financial statements of business enterprises;<br />

• Czech Accounting Standards (hereinafter “CAS”) for business enterprises subject to Decree of the Ministry of<br />

Finance No. 500/2002 Coll.<br />

Act on accounting until 2010<br />

Despite the fact that EU regulations are binding in their entirety and directly applicable in all member states<br />

of the EU without any further implementation in national legislation, the provisions of Regulation (EC)<br />

1606/2002 were incorporated directly into Act No. 563/1991 Coll., on accounting. The obligation to prepare<br />

consolidated financial statements according to the International Accounting Standards as adopted by the EU by<br />

companies listed on the EU capital markets is included in §23a, article 1. However, the Czech regulator of<br />

accounting went one step forward and set up a duty to prepare individual financial statements by affected<br />

companies. According to § 19, article 9 “entities, which are business companies and which are the issuers of<br />

securities publicly traded on a regulated market in the member states of the European Union, shall apply the<br />

International Accounting Standards as adopted by the EU for keeping their accounts and for preparation of<br />

financial statements”.<br />

In addition, the IFRS can be applied voluntarily in the consolidated financial statements of non-listed companies (§23a, article 2). No other entities were allowed to choose the IFRS on a voluntary basis.

Act on Accounting from 2011

After several years of effort by academics and practitioners, the Ministry of Finance proposed an amendment to the Act on Accounting, which was approved by the Parliament in 2010. Starting from 2011, companies specified by the Act are allowed to select the IFRS as the basis for the preparation of individual financial statements, which are accepted for statutory purposes. According to the new §19a, articles 7 and 8, the following three groups of entities may opt to use the IFRS in their individual financial statements:

• parent companies voluntarily preparing consolidated financial statements in compliance with the IFRS pursuant to §23a, article 2;

• controlled entities belonging to a consolidation group for which the consolidating company prepares IFRS consolidated statements;

• joint ventures belonging to a consolidation group for which the consolidating company prepares IFRS consolidated statements.

The amendment of the Act has changed the position of companies covered by Category II (see the description further in the text). From 2011, Category II entities can choose whether to prepare two sets of financial statements (both CAS and IFRS) or only one set (IFRS only).

4.2 Issues regarding the implementation of Regulation (EC) 1606/2002

As far as financial reporting is concerned, three groups of Czech companies can be recognised. To summarise, the provisions of the Czech Act on Accounting distinguished the following groups of companies until 2010:

• Category I (large Czech companies publicly traded on stock exchanges in EU markets – IFRS reporting only):

These entities have both to account for their transactions and to prepare their financial statements (both individual and consolidated) using the IFRS. They are not required to prepare financial statements according to the CAS, as financial statements prepared in accordance with the IFRS are also accepted for statutory purposes.

• Category II (small and medium-sized enterprises – both CAS and IFRS reporting):

This category covers a diverse group of companies. Their common feature is that the companies in question are not themselves direct issuers of publicly traded securities; their owners, however, are such issuers. Therefore, companies belonging to this category shall prepare their individual financial statements in accordance with the CAS for statutory purposes. In addition, they shall provide their parent companies with the financial statements and other information needed for consolidation in compliance with the IFRS. The Act on Accounting did not permit these entities any voluntary application of the IFRS instead of the CAS. Companies voluntarily preparing consolidated financial statements pursuant to §23a, article 2 may also be subsumed in this group.

• Category III (small and medium-sized enterprises – CAS reporting only):

Category III covers family-owned companies and other companies that are neither direct nor indirect issuers of publicly traded securities. They shall account for and report in accordance with the CAS (again without the possibility of applying the IFRS voluntarily).

The provisions of the Act on Accounting required mandatory application of the IFRS not only in the consolidated financial statements of listed companies (pursuant to Regulation 1606/2002), but also in their individual statements. Individual financial statements are accepted for the statutory purposes laid down by the Commercial Code and are submitted to the Business Register. This means that listed companies (Category I companies) are not engaged in the process of financial statements conversion, as both individual and consolidated financial statements are prepared according to the IFRS and no additional set of financial statements prepared in accordance with the CAS is necessary.

Conversion of financial statements is an important issue for companies covered by Category II. The majority of Czech companies are not directly listed on stock exchanges (there are only 60 issuers on the Prague Stock Exchange). According to the Act on Accounting, all non-listed companies had to keep their accounts and prepare their individual financial statements in accordance with the CAS. However, about 40% of Czech companies are under the control of foreign owners. Many of these owners are domiciled in Germany, the Netherlands, Austria and other EU member states, and they are often listed on stock exchanges. For consolidation purposes, Czech companies must provide their parent companies with IFRS financial statements.

As voluntary application of the IFRS in individual financial statements was not allowed until the end of 2010, affected companies faced the problem of financial statements conversion: statutory accounts were kept in compliance with the CAS, and the statutory statements consequently had to be converted into IFRS statements.

The conversion is not a trivial issue, as a huge number of differences between the CAS and IFRS exist. PwC (2009) published a comprehensive analysis of the differences between the IFRS and CAS spanning 80 pages.

Therefore, the decision as to which method of conversion to use needs deeper analysis by an entity's management. All relevant advantages, disadvantages, possible benefits and cost constraints should be taken into account.

The method of conversion at the financial statements level is not appropriate for the vast majority of Czech companies, as the differences between the CAS and IFRS are not insignificant. The remaining two approaches are therefore favoured by Czech companies. The second method of financial statements conversion (at the trial balance level, using spreadsheet applications such as Excel) represents a "golden middle way", as benefits and costs are balanced for the majority of Czech companies reporting under both the CAS and IFRS. A low level of conclusiveness, together with dependence on the single expert usually responsible for the conversion, is offset by significant ICT cost savings, because no specialised software is necessary under this approach. The last method (a dual accounting system) is applied by those Czech companies belonging to big supranational consolidation groups which use the same accounting and reporting system for all group companies. Higher ICT and labour costs are counterbalanced by two datasets of information. Moreover, the conclusiveness and consistency of accounting records is a valuable asset of this method.
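The trial-balance conversion described above is, in essence, a mapping of CAS account balances to IFRS statement lines plus a set of off-ledger adjustments. The following minimal Python sketch illustrates the mechanics; the account codes, the mapping and the single lease adjustment are hypothetical illustrations, not actual CAS or IFRS rules.

```python
# Sketch of a trial-balance-level conversion from CAS to IFRS (the
# spreadsheet approach described above). All codes and figures are
# hypothetical illustrations.

# CAS trial balance: account code -> closing balance (debits positive)
cas_trial_balance = {
    "022": 1_000,   # machinery (cost)
    "082": -400,    # accumulated depreciation on machinery
    "311": 250,     # trade receivables
    "321": -150,    # trade payables
    "411": -700,    # share capital
}

# Mapping of CAS accounts to IFRS statement lines
mapping = {
    "022": "Property, plant and equipment",
    "082": "Property, plant and equipment",
    "311": "Trade receivables",
    "321": "Trade payables",
    "411": "Share capital",
}

# Reclassification/remeasurement adjustments recorded outside the ledger,
# e.g. recognising a lease asset that the CAS books left off balance sheet.
adjustments = [
    ("Property, plant and equipment", 120),
    ("Lease liabilities", -120),
]

def convert(tb, mapping, adjustments):
    ifrs = {}
    for account, balance in tb.items():
        line = mapping[account]
        ifrs[line] = ifrs.get(line, 0) + balance
    for line, amount in adjustments:
        ifrs[line] = ifrs.get(line, 0) + amount
    # Double-entry check: total debits must still equal total credits,
    # which holds only if every adjustment itself balances to zero.
    assert sum(ifrs.values()) == sum(tb.values())
    return ifrs

ifrs_lines = convert(cas_trial_balance, mapping, adjustments)
```

The closing double-entry check is precisely the kind of control that ad-hoc spreadsheet conversions often omit, which is one source of the omissions and computational mistakes this method is criticised for.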

The impossibility of applying the IFRS voluntarily produces high social costs regardless of which method of conversion is chosen by entities: scarce economic resources have to be employed in non-productive use. Academics and the accounting profession therefore persistently tried to persuade the Ministry of Finance to amend the Act on Accounting by enabling voluntary application of the IFRS in the individual financial statements of Category II companies. The Ministry of Finance finally recognised this proposal as justified. Starting from 2011, Czech companies which are consolidated entities in the context of Regulation 1606/2002 can choose to prepare their individual financial statements in accordance with the IFRS. In the case of optional application of the IFRS, financial statements conversion is no longer an issue. However, companies may decide to maintain the status quo and continue to prepare their individual statements under CAS principles.

The new provisions of the Act were enacted in December 2010. How many entities will utilise the amendments of the Act is still uncertain. As the implementation of new accounting software is a rather complicated project, it is highly improbable that any companies switched to the IFRS as early as January 2011. The first empirical evidence will not be available before next year. The author of the paper carried out a quick empirical pre-study to evaluate the readiness of entities, external accounting firms, firms developing accounting software and auditors for the IFRS transition. A questionnaire containing two sets of questions was answered by ten accounting or auditing firms. The first group of questions relates to the possible extent of differences between the IFRS and CAS among various types of companies.

Table 1: Differences between IFRS and CAS financial statements

                 Insignificant   Rather insignificant   Rather significant   Significant
Manufacturers          0%                 0%                    60%              40%
Merchants             80%                20%                     0%               0%
Services               0%                50%                    40%              10%

Source: Authorial survey

The second group of questions focuses on the evaluation of readiness for IFRS adoption by various subjects.

Table 2: Readiness for voluntary application of IFRS by non-listed companies

                   Certainly not   Rather not   Rather yes   Certainly yes
Entities                 0%            10%          40%           50%
Accounting firms        10%            30%          50%           10%
Auditors                 0%            50%          50%            0%
Software firms          10%            80%          10%            0%

Source: Authorial survey

Despite the fact that the research is not fully representative because of the restricted size of the sample, certain tendencies can be derived from the respondents' answers. The reactions to the first set of questions affirm the general conclusion about the high number of differences between the IFRS and CAS. According to professional accountants and auditors, this issue is relevant especially for manufacturing companies (which create a significant part of Czech gross domestic product). In addition, differences are an issue for entities providing services, which usually struggle with

revenue recognition, as there is no guidance on this issue in the CAS (neither general, nor for construction contracts) and revenue recognition is mainly influenced by legal and tax matters.

As far as readiness for voluntary application of the IFRS is concerned, companies themselves are at the top of the list. Companies currently preparing both sets of financial statements should have relatively little difficulty shifting from CAS statutory accounts to the IFRS. On the other hand, Czech software firms are believed not to be ready for the transition, which could bring problems for companies considering voluntary IFRS adoption. ICT solutions for keeping accounts according to the IFRS are offered by foreign software developers (such as SAP). However, the costs of this solution can be prohibitive for the affected companies, especially for medium-sized enterprises. According to the author's experience, replacing local accounting software with an ERP system can increase annual IT expenses in Czech companies by up to ten times.

The possible advantages of a voluntary shift to the IFRS in statutory accounting may be evaluated with reference to the experience of Czech listed companies, which have to apply the IFRS obligatorily in both consolidated and individual financial statements. From the roughly 60 issuers on the Prague Stock Exchange, some issuers were excluded, such as public sector institutions, financial institutions and issuers domiciled abroad. Representatives of the twenty-three companies remaining in the sample were asked to evaluate the benefits and costs of IFRS implementation in their companies. The benefits mentioned by these adopters may serve as a useful source of reference for companies considering utilising the new provision of the Act on Accounting which allows voluntary IFRS adoption in individual financial statements by designated entities. The respondents' answers on benefits are summarized in Table 3.

Table 3: Benefits from IFRS implementation on companies' level

                                                 Certainly yes   Rather yes   Rather not   Certainly not
Easier access to financing by share capital           73%            27%           0%            0%
Easier access to financing by bonds                    9%            55%          27%            9%
Easier access to financing by bank credits             0%            45%          36%           18%
Easier reporting within consolidation group           64%            36%           0%            0%
Increased relevance of data for management             0%            27%          55%           18%
Increased credibility for our trade partners           0%            64%          27%            9%
Increased reputation for general public                0%            45%          55%            0%

Source: Authorial survey

Once again, the presented table cannot be taken to represent generally valid inferences due to the restricted sample. However, the results are not surprising for most of the answers and correspond to general expectations. As the IFRS are compulsory for listed companies, the connection between IFRS implementation and the possibility of raising share capital financing is very close. A successful issuance of bonds is also supported by IFRS adoption, as capital markets are the only source of available debt funds even in a capital market as undeveloped as the Czech one. Banks possess more tools for evaluating the financial health of applicants for bank credits; therefore, IFRS adopters do not perceive the shift to the IFRS to be really relevant for this purpose. Based on personal talks with some representatives, IFRS statements play no role in banks' assessment of whether to grant a company a requested credit. Nevertheless, IFRS statements are said to significantly reduce the costs of preparing application forms and other documentation required by banks.

The IFRS implementation is not supposed to enhance data relevance for management purposes. This is a rather interesting outcome, taking into account the relatively low relevance and usefulness of the accounting principles set up by Czech national accounting legislation. There are two possible explanations for this phenomenon. Firstly, managers are accustomed to the former CAS, which they used for years, and they do not understand and/or do not trust data based on the IFRS. Secondly, due to the weaknesses of the CAS, companies were forced to develop high-quality management accounting systems removing those weak points and supplying relevant data for decision-making. Therefore, a shift to the IFRS has not produced any significant increase in data relevance from the managers' point of view. This finding needs further attention and scrutiny.

Representatives are indecisive regarding the influence of IFRS statements and annual reports on company reputation in the eyes of the general public. On the other hand, IFRS statements are supposed to be welcomed by trade partners, who usually apply them for credit scoring of customers and other risk management procedures. Finally,

the IFRS implementation brought substantial advantages to the process of preparing consolidated financial statements. Representatives of all listed companies assert that IFRS adoption has eased reporting within the consolidation group. This may be the crucial supporting element for Czech non-listed companies when evaluating whether to adopt the IFRS voluntarily in individual financial statements, as allowed by the Act on Accounting starting from 2011.

5 Conclusions

If the IFRS are not allowed to be applied voluntarily in the individual financial statements of listed companies, or in the individual statements of non-listed companies belonging to a group consolidated in compliance with the IFRS, the necessity of financial statements conversion arises. The second case is common practice in the Czech Republic, because the Act on Accounting did not enable non-listed entities to use the IFRS on an optional basis until 2010. Financial statements conversion is therefore an issue for almost 40% of Czech companies. Keeping accounts according to two sets of rather different accounting standards in a single accounting software system is a very costly matter. Therefore, most entities have chosen to perform the conversion at the trial balance level using Excel and similar spreadsheets.

Such an approach reduces costs significantly; however, the accuracy and transparency of the conversion process rest heavily on the abilities of the single expert in charge of the conversion. The conclusiveness of such conversions is generally not high. The author of the paper has experience of a significant number of companies whose financial statements, converted from the CAS to the IFRS using the trial balance method of conversion, are not in compliance with all provisions of the IFRS. Spreadsheets are not constructed to cope with all the nuances of double-entry accounting. With a rising number of differences between the CAS and IFRS, omissions and computational mistakes become a common feature of this method of financial statement conversion. Mistakes and omissions remain even after being checked by auditors. As a consequence, the quality of the consolidated financial statements presented by parent companies may be severely impaired, because many Czech companies contribute a significant part of the consolidated figures. In my opinion, this is an issue of fundamental importance not only for the Czech Republic, but also worldwide. Nevertheless, this issue is not really addressed by either current practice or research.

In the context of the doubts above, the amendment of the Act on Accounting at the end of 2010, which enables selected companies to apply the IFRS voluntarily, is to be welcomed. By allowing companies to apply the IFRS in their individual statements, the Czech Republic follows the pattern of financial reporting used e.g. in the Netherlands, Denmark and Ireland. The amendment should lead to the presentation of accounting information that is more useful for the public. A second favourable effect would be the reduction of the costs connected with recording transactions and preparing financial statements under two different sets of accounting standards. Finally, the risk of errors contained in consolidated financial statements would substantially decrease, because voluntary IFRS adoption in individual financial statements means that all records are kept within the accounting software and not outside in spreadsheets.

Requirements to prepare individual statements compulsorily according to national accounting legislation impose high costs on companies and have other negative economic consequences. National accounting regulators should enable affected companies to prepare their individual statements on an alternative basis, most notably in compliance with the IFRS. The development in the Czech Republic can serve as a source of inspiration for all countries addressing the relationship between the financial reporting standards applied to consolidated and individual financial statements. However, more robust empirical evidence will be available in 2013 at the earliest, as the first companies will probably switch to the IFRS from 2012 (the Act having been amended in December 2010).

There are some restrictions impairing the inferences of this study. Firstly, not all companies are allowed to apply the IFRS voluntarily. Only entities that are subject to full consolidation under the IFRS may take advantage of the option offered by the amendment of the Act on Accounting. As a consequence, companies classified as investments in associates and consolidated by the equity method are excluded from this option.

The crucial problem, however, is that Czech accounting is closely linked with the taxation system. For the computation of current income tax, only net income according to the CAS is relevant. Therefore, all companies, regardless of whether they prepare individual financial statements according to the IFRS compulsorily or voluntarily, have to keep evidence of taxable income based on the CAS. Without releasing financial accounting from the income

tax law, the conversion of financial statements will remain common practice for Czech companies. The author of the paper is a member of a research team currently working on a study evaluating various approaches by which the tax system can be separated from financial reporting. The future conclusions of this study may be valid for all countries in which the state regulator carries out the regulation of financial reporting mainly for tax purposes.
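The coupling described above means that an IFRS reporter must still track a CAS-based result in parallel, because only that result feeds the tax computation. A hypothetical numeric illustration (the figures are invented; 19% was the Czech corporate income tax rate around 2011):

```python
# Hypothetical illustration of the CAS-tax coupling: the company reports
# under IFRS, yet current income tax is computed from CAS net income,
# so both results must be kept. All figures are illustrative assumptions.
cas_net_income = 900        # CAS-based profit: the only tax-relevant figure
ifrs_net_income = 1_050     # IFRS-based profit: the figure in the statements

TAX_RATE = 0.19             # Czech corporate income tax rate around 2011

# The tax computation deliberately ignores the IFRS result.
current_tax = round(cas_net_income * TAX_RATE)
```

The point of the sketch is the asymmetry: whatever the IFRS profit is, the tax charge depends solely on the CAS figure, which is why conversion (or dual bookkeeping) cannot be avoided without tax reform.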

6 Summary

The paper discusses various approaches to the conversion of financial statements, and their advantages and disadvantages are evaluated. This issue is relevant especially for countries which have adopted the IFRS while companies are concurrently required to prepare a complementary set of financial statements according to national GAAP. Empirical evidence on Czech non-listed companies is provided in the paper.

7 Acknowledgements

The paper was prepared as an output of the research project "Adoption of the IFRS and Its Influence on the Mobility of Labour and Capital" supported by the Internal Science Foundation of the University of Economics, Prague (registration number F1/4/2011).

8 References

Aubert, F. – Grudnitski, G. (2009). The Importance and Impact of Mandatory Adoption of International Financial Reporting Standards in Europe. Tampere, 32nd Annual Congress of the European Accounting Association, 12. 5. 2009 – 15. 5. 2009.

Bagaeva, A. (2009). The IFRS and Accounting Quality in the Transitional Economy: A Case of Russia. Tampere, 32nd Annual Congress of the European Accounting Association, 12. 5. 2009 – 15. 5. 2009.

Barth, M. E. – Landsman, W. R. – Lang, M. H. (2008). International Accounting Standards and Accounting Quality. Journal of Accounting Research, vol. 46, is. 3, p. 467–498.

Christensen, H. B. – Lee, E. – Walker, M. (2009). Do IFRS Reconciliations Convey Information? The Effect of Debt Contracting. Journal of Accounting Research, vol. 47, is. 5, p. 1167–1199.

Cordazzo, M. (2008). The Impact of IAS/IFRS on Accounting Practices: Evidence from Italian Listed Companies. Rotterdam, 31st Annual Congress of the European Accounting Association, 22. 4. 2009 – 25. 4. 2009.

Ferrer, C. – Callao, S. – Jarne, J. I. – Lainez, J. A. (2009). IFRS Adoption in Spain and the United Kingdom: Effects on Accounting Numbers and Relevance. Tampere, 32nd Annual Congress of the European Accounting Association, 12. 5. 2009 – 15. 5. 2009.

Jaruga, A. – Fijalkowska, J. – Frendzel, M. – Jaruga-Baranowska, M. (2007). The Impact of IAS/IFRS on the Accounting Regulations and Practical Implementation in Poland. Lisbon, 30th Annual Congress of the European Accounting Association, 24. 4. 2009 – 27. 4. 2009.

Mejzlík, L. (2006). Accounting Information Systems [Text in Czech: Účetní informační systémy]. Prague: Oeconomica Publishing House, 2006.

Morais, A. – Curto, J. D. (2007). IASB Standards Adoption: Value Relevance and the Influence of Country-Specific Factors. Lisbon, 30th Annual Congress of the European Accounting Association, 24. 4. 2009 – 27. 4. 2009.

Mustata, R. – Matis, D. – Dragos, C. (2009). The Challenges of Accounting Harmonisation: Empirical Evidence of the Romanian Experience. Tampere, 32nd Annual Congress of the European Accounting Association, 12. 5. 2009 – 15. 5. 2009.

PricewaterhouseCoopers Audit (2009). IFRS and Czech Accounting Standards – Similarities and Differences [Text in Czech: IFRS a české účetní předpisy – podobnosti a rozdíly]. Prague: PricewaterhouseCoopers Audit, 2009.

NEGOTIATING SUCCESSION STRATEGY IN FAMILY RUN SMEs

Christopher Milner

Dept. Strategy & Business Systems, University of Portsmouth

Richmond Building, Portland Street, Portsmouth, PO1 3DE, UK

Abstract. The entrepreneurial founders of family run organisations often choose intergenerational succession as their preferred exit strategy, yet statistics suggest that only 30% manage the succession process successfully. This paper tracks and analyses the experience of six organizations, highlighting those factors which appear to support a successful transition. The comparison between current thinking and founder perception clearly identifies gaps that may serve as explanations for failure; this paper focuses upon the suggested and perceived attributes of the successor and the methods of his or her preparation. Thorough analysis allows the building of a contextual model of intergenerational succession and a set of recommendations to help family organisations achieve a successful and sustainable transfer of leadership.

Many founders choose intergenerational succession as their exit strategy, wanting their children to take over at the helm of the organisation. Literature surrounding the succession process in family run companies is thin and fragmented, and the limited attention it has been given seems to have a negative slant, which is understandable considering that many firms with a history of success fail to survive past the tenure of the founding generation (Birley, 1986; Kets de Vries, 2001; Morris et al., 1997). This paper focuses on the gap between what previous research suggests as best practice and founder perception, with particular interest in the preparation of the successor.

Stepping down can prove much harder for founding owners than for their professionally oriented 'employed' counterparts, as they can perceive their businesses as an extension of themselves. The choice of exit strategies available can depend on a number of variables, including the entrepreneur's values and wants for the long-term future of the company and its operational structure and financial performance, while the state of the economy and of the market in which the organisation operates is central to the options available.

According to Sharma et al. (2001), the single most cited obstacle to effective succession is the incumbent's inability to let go. With careful implementation and management of the succession plan, handover brings in the new chapter, and should involve a process of mutual role adjustment (Handler 1990, 1992), in which the incumbent's role gradually diminishes whilst the successor's gradually increases. The author suggests that the transition should be viewed from an evolutionary perspective, through a balance of internal and external methods; however, there are cases where a revolutionary approach is necessary due to a lack of willingness on the founder's side to let go of the organization they have themselves built. A number of variables collide at this critical point in the process, so its management and planning are vital. The founder's willingness to let go and the successor's desire and ability to take over should have increased over the course of the succession process.

1. Literature Review

The attributes of the incumbent, the successor, and the relationship between them serve as the foundation and backbone of the potential success of the process; Handler (1992) reports, however, that very little is known about the role played by either. Do they have the trust and mutual respect needed for an effective transfer of tacit knowledge? Handler (1990) suggests that mutual respect and understanding between generations is crucial; do they have a realistic appreciation of the skills and abilities, of what one has done and of what the other can do? Do they share similar views on the current and future needs, strategic positioning and direction of the company, and is it important to do so? Is the incumbent willing to truly hand over the reins, and does the successor have the competence, ability, education and knowledge that may be necessary, along with the passion and motivation to take them?

Founders of organisations have an entrepreneurial spirit, a need to be their own boss and to build success through their own direction. Kirzner (1985) describes them as individuals who demonstrate the alertness, character and temperament needed to exploit a discovered opportunity, characterised by innovation, risk-taking, creativity and growth, while Berry (1998) defines a firm run by an entrepreneurial founder as 'an independent enterprise, managed in a personalised way'.

The characteristics of entrepreneurs can be divided into the instinctive, intuitive and impulsive 'charismatic' and the more conservative and realistic 'pragmatic', distinguished through the variables brought together from research done by McCarthy (2001). The author suggests that a charismatic character with an obsessive nature and less business acumen is more likely to view the intergenerational succession process in a simplified way, and may be more likely to be unwilling to let go of the organisation during the final stages of any future succession. In fact, the incumbent's motivation and willingness to step down can prove to be a major obstacle: Dyer (1986), Handler (1990), Lansberg (1999) and McGiven (1978) all suggest that the predecessor's ability to overcome anxiety about succession, to move beyond the denial stage, and to be willing to confront succession and let go is crucial to the success of the process. They must face normal fears such as losing control, power, and even part of their identity and stature in the community (Potts et al., 2001).

As the individual being groomed to take over the management and leadership of an organisation, the successor could be described as the key to the long-term operational and financial future of the company. They are a central figure in discovering what is necessary in order to successfully navigate an intergenerational succession process.

The depth of literature on intergenerational successors is relatively thin, but their ability, in terms of potential leadership and business management, is often questioned. It could be stated that the company, depending on its position within its lifecycle, would benefit from running as a professionally structured organisation with the appointment of a more highly skilled, educated and experienced manager, rather than a family-oriented one; but research shows that family businesses are idiosyncratic to a high degree (Nooteboom, 1993; Pollak, 1985) and are more likely to appoint their potentially less competent offspring. The author would, though, suggest that there is high value in mentoring and on-the-job training, which can result in contextual competence in the business; so whilst potentially not as formally educated as an externally sourced manager, the successor may not be any less competent, but may hold competence of a different type. Although theorists (Dyer et al., 1986) have stated that the main cause of intergenerational succession failure is the founding incumbent not letting go of the organisation, the successor also holds a great responsibility: do they hold the ability, suitability and motivation to run the business, have they been properly prepared, and does the founder have realistic expectations of them? Lee, Lim & Lim (2003) have written that when personal and emotional factors determine who the next leader will be, poorly qualified successors can prove detrimental to the success and the short- and long-term stability of the organisation.

Morris et al. (1997) and Chrisman et al. (1998) note the importance of experience, suggesting a successor who has worked within the family firm, as well as in other organisations, will be better prepared. Next-generation family members are often working full time in the business, building organisational knowledge, but according to Longenecker & Schoen (1978) they tend to characterise their role as limited but functional.<br />

Whilst theoretical and founder perceptions of how vital it is to train, educate and build the tools necessary to run an organisation may prove to differ, one potential central problem, researched by Howorth and Ali (2001), is that a number of patriarchal family owners fail to prepare the successor to succeed, possibly putting their children in a very difficult and stressful position and putting the future stability of the organisation and the prospective success of the succession process at risk. Whether this failure is due to a lack of appreciation of the importance of a well-structured succession plan in building the successor’s ability and confidence, or to the founder having an unrealistic view of their children’s attributes and abilities, it is understandable why some successors may find it difficult to run the company, or to gain the respect and credibility of the staff.<br />

Founders can have a large bearing on the culture of an organisation; the closer the founder’s and successor’s thoughts on objectives, strategy, values and ethics, the smoother the transition could be. Miller, Steier and Le Breton-Miller’s (2003) research suggests three types of succession, depending on the actions of the successor once they take over the management and leadership of the organisation: conservative, wavering and rebellious.<br />

While some literature views succession as a short-term handover, others see it as a multi-stage, long-term process (Handler, 1990), but a process Lansberg (1999) argues few family firms are capable of successfully completing. Dyck et al.’s (2002) analogy of the 4x100 relay race brings together the elements of sequence, timing, baton-passing technique and communication, while Miller, Le Breton-Miller & Steier’s (2004) integrative model of succession succeeds in bringing a fuller picture to light. This model suggests that respect, understanding and cooperation between the founder and successor are vital, supporting the thinking that the founder must be willing to let go, and that the successor, if they have the motivation and commitment alongside the ability and competence, must make the choice whether they want to take over.<br />

Alongside identification of the perceived necessary attributes, it is important to pay attention to the ways in which these attributes are developed and facilitated. Theory suggests that education positively correlates with a smooth transition and post-succession performance (Morris et al. 1997) and sets a foundation for areas including business strategy and marketing, and that internal training (Neubauer & Lank, 1998), knowledge transfer, a growing involvement within the organisation (Cabrera-Suarez et al. 2001) and outside work experience (Barach & Gantisky, 1995) are all important.<br />

2. Data and Methodology<br />

The data collection process followed a unique but appropriate two-stage format. Due to existing business relationships, it was possible to conduct initial, informal conversations with participants over a period of months. These were followed by a final, formal semi-structured and open-ended interview. The interviews were used as the main source of primary data, generating rich and detailed accounts of each individual’s experience, perceptions, plans and thinking.<br />

The interviews were transcribed and summarised as case studies, the results analysed, and a summary matrix formulated, enabling a clear comparison between current theory and founder perceptions. Whilst these are not available within this paper, the results and gap analysis are quite enlightening. Due to the depth of the study, many conclusions could be put forth, but for the purpose of this paper the most relevant trends found are offered.<br />

With a small sample of subjects more appropriate than a large one, those included within the study are all founders (owning 100% of shares) of family-led SMEs located in the southern counties of England and operating within the retail environment, for which intergenerational succession is a current or future possibility.<br />

Yin & Heald (1975, p.372) suggest “the case study method is mainly concerned with the analysis of qualitative evidence in a reliable manner”, and it is on this approach that this research is based, with the six case participants listed in Figure 1 below.<br />

3. Empirical Findings and Discussion<br />

Figure 1: Case Participants<br />

Conclusions drawn from the case studies suggest that each of the founders choosing succession as their exit strategy expects the transfer to be successful and, whilst on a general basis perceiving long-term, semi-formal planning, implementation and management to be necessary, they have some very interesting views on what the process should consist of, especially relating to the successor’s training needs.<br />

Of the six founders involved in the study, five have chosen intergenerational succession as their likely exit strategy, for reasons ranging from that of the founder of Company A, who states that it is not something he has chosen but “one of those things that happens”, to those of the founder of Company F, who cites being able to retire while still taking an income and enabling the children to have a career. The founder of Company D, though, has chosen to sell his organisation, questioning “why anyone would want to bring their son or daughter into their business”.<br />

Whilst some founders may be very content and look forward to the comfort and warmth of their next chapter, literature shows that some feel great anxiety in the face of retirement (Dyer, 1986; Handler, 1990; Lansberg, 1999). All but one of the founders in the study stated that they are looking forward to retirement. Other than the founder of Company D, who states that he will miss “nothing at all”, the rest expect to miss aspects of running the organisation; the founders of Companies E and F cite day-to-day contact with customers, suppliers and staff as some of the main elements. This is an interesting link in variables considering Company D is the only one stating “the business being sold” as the likely exit strategy; those choosing intergenerational succession all intend to take up figurehead roles and so expect to remain involved within their organisation.<br />

The founder of Company A has the strongest desire to continue, stating “even if I take a semi-retirement backseat role, I will remain involved”; asked for what reason he would leave his post, his reply was “I will die”. The owner of Company F comments that “if there was a problem and a figurehead was needed, I could step in and sort it out”. This is potential evidence of founders themselves being obstacles to a successful succession, by not letting go of the organisation’s reins.<br />

Motivation and enthusiasm top the founders’ list of necessary attributes; the author agrees that this is a crucial factor, but one requiring a balance of competence and business acumen. Leadership and credibility come a close second, followed by organisational knowledge and management and strategic ability. While Morris et al. (1997) state that education is linked to positive succession, the founders do not hold it in high esteem, Company F’s founder stating that “common sense is required rather than education”.<br />

It is the successors whom the founders see as the ones who must hold such attributes in order to successfully run the organisation, and while they state motivation as the key, a question hangs over whether the successor’s business acumen and management ability is looked at from a parental perspective or truly measured through a professional’s eye. While the majority seemed very confident in their children’s current and prospective future attribute set and abilities, the founder of Company A states that his successor, other than motivation, “needs to work on all of them, she basically does not have most of them”, with which she openly concurs, saying “sometimes I doubt my own abilities….. it will take a lot of work”. Due to the limited pool and lack of potential options it is difficult to quantify to what extent founders initially question their successor’s ability or suitability, especially those whom the current literature states make the decision of intergenerational succession for emotional and personal reasons. How well the successor is prepared and primed for such an important role in such circumstances is then down to the succession plan itself, and the development that takes place during the process.<br />

Cabrera-Suarez et al. (2001) suggest that the closer the relationship between the founder and successor, and their thoughts on the organisation’s objectives, performance, values and ethics, the smoother the succession may be. Drawing on the case studies, the relationship between founder and successor appears to be of particular importance. The founder of Company C sees “personality crashes” as a potential problem; the founder of Company A sees his relationship with his successor as a strong father/daughter bond, and while the finance director and daughter agree, she recognises the problem of not being able to separate personal and professional life, stating that they “cross over most of the time”.<br />

The founder of Company D states “if you let one of your children take over your business, you will never accept that they can do it better than you, and you will never let them”; alongside the ability and attribute set the successor has, or has built over the succession process, they may also need a strong character in the face of too much founder involvement.<br />



It is natural that family organisations have a much smaller pool of talent from which to draw, especially when it comes to intergenerational succession, and so the development of the successor is of great consequence during the process. A potentially successful successor should be motivated and have leadership ability alongside credibility, intrinsic organisational knowledge, good management ability, experience and an education (Howorth and Ali, 2001; Morris et al. 1997; Chrisman et al. 1998; Handler, 1990), and this is where the gaps between theory and founder perception become more visible. As can be seen in Figure 2, while motivation, leadership ability and credibility hold top positions within the founders’ attribute list, organisational knowledge and management ability trail, leaving education and real management experience behind, and so the risk of handing the business over to an unprepared or under-qualified successor is increased.<br />

The author concurs with Morris et al. (1997), who suggest that education positively correlates with a smooth transition and post-succession performance and sets a foundation for areas including business strategy and marketing. Alongside this, internal training (Neubauer & Lank, 1998), knowledge transfer, a growing involvement within the organisation (Cabrera-Suarez et al. 2001) and outside work experience (Barach & Gantisky, 1995) are all important; the founders, again, have a different perspective, as can be seen in Figure 3.<br />

The founder of Company D stands apart from the other founding members by questioning the reasoning behind intergenerational succession, and states that outside work experience is essential as a learning platform, suggesting that “working for a few good people and a few bad people” brings with it a greater understanding and knowledge base, and that young successors “need to stand on their own two feet”. The other founders expect that an internal training scheme, and gaining knowledge from the founders themselves, which the potential successors in all but one of the organisations have already begun, will enable the successor to gain everything they need. The author suggests that this is likely to be a natural consequence of the founders having children, and of the children’s lifelong involvement within the company. The author also puts forward that a training programme originating entirely from within the organisation is not enough to prepare the successor, especially when they have so far taken a backseat role in the organisation with limited functionality, this supported by the work of Longenecker & Schoen (1978).<br />

Figure 2: Successor Attribute Gap<br />



Figure 3: Training Method Gap<br />

To gain the credibility and respect of the staff, which the author considers factors that may be necessary in order to lead effectively, a broader learning platform could further enable the successor’s development and potentially build a greater and more effective skill set. None of the founders taking part in the research, though, saw formal education as a valuable tool; it could be assumed that they would prefer their children to learn through the school of hard knocks that they themselves may have worked through, or, as Ibrahim et al. (2004) suggest, that they view education as more of a cost burden than an investment in future growth.<br />

The case studies suggest that founders much prefer internal training mechanisms, whilst some research (Howorth and Ali, 2001; Morris et al. 1997; Chrisman et al. 1998; Handler, 1990) takes into account the value of outside contributors, including work experience and education.<br />

With careful implementation and management of the succession plan, handover brings in the new chapter for the organisation; some literature suggests that it should involve a process of mutual role adjustment (Handler, 1990, 1992), in which the incumbent’s role gradually diminishes whilst the successor’s gradually increases. The analysis of the case studies shows that the founders perceive the management of the handover to be crucial, with a gradual responsibility changeover facilitating its completion. While the founder of Company A does not ever see himself fully stepping down, the rest intend to hold figurehead roles, ranging from the founder of Company B, who states he will “always be available”, to the founder of Company F, who states “I think I will remain with the business, less on the responsibility and decision making role, and more as sticking my head in the door, making sure the suppliers and customers know I’m around, but only on a superficial level”. This founder goes on to say that “you almost have to let go through gritted teeth, but knowing it’s overall the best thing to do”; handover clearly serves as a potentially critical period, and some potential scenarios are explored later in the paper.<br />

While most of the founders seem comfortable with their successors bringing new impetus and young blood into the organisation, and intend to be available, acting as figureheads for their offspring, there is a possibility they will be surprised at the effect of any of the three previously discussed succession types: conservative, wavering and rebellious (Miller, Steier and Le Breton-Miller, 2003). If they see nothing happening, poor strategy or radical change, the research results suggest that they may find themselves wanting to take back some control; such a situation could become very complicated and frustrating. The likely successor in Company A states “I think initially I will sit back and see how things go, and then if something comes to light that might need changing, I will implement those changes”, while the financial director questions her true motivation and ability to do so. The variables involved in a succession process are vast: the founder, successor, organisation, process plan and handover; it is becoming increasingly evident why it can be such a dangerous time for any family company.<br />

Clear and concise communication between the founders themselves and their successors is viewed as crucial by the literature and founders alike; a healthy respect enables the transfer of tacit knowledge and provides a platform on which to train.<br />



Communication of the succession plan itself can also prove vital throughout the company board to ensure implementation; interestingly, the founders of Companies B and F state that keeping the staff within the company in the loop, communicating the succession plan to them so that they are aware of the future company structure, should help to ensure their involvement and motivation to help things transfer smoothly.<br />

The founders agree that the succession process needs to be formally planned and managed, this supported by researchers including Handler (1990). Like any business strategy, true implementation is key to ensuring it achieves its objectives; it may need amending from time to time, changing when unforeseen circumstances raise their head, but the management of such is essential. The Finance Director of Company A states, in answer to how important management of the process is: “Most definitely, especially in a family owned business, it’s even more important than in large corporate organisations”.<br />

The results have brought forward a clear message of what is perceived to be necessary to help bring about a successful transfer, and of the needed attributes and preparation of the potential successor: a strong, respectful relationship between a founder who is willing to let go and a successor with the motivation and ability to take over; a shared, long-term, multi-stage and well-managed formal succession plan in which to prepare the organisation and develop the successor’s abilities and attributes through the use of internal and external facilitators; and a handover through mutual adjustment.<br />

The founders proved to be very interesting individuals, passionate about their organisations and very confident about the future, whose thoughts on the many elements of the succession process were aligned with current thinking. The clear gaps identified in Figures 2 and 3, though, signify how family organisations do prefer to do things from within, potentially losing the vital value of outside work experience and education, two sources of development that the literature suggests positively correlate with a smooth transition and post-succession performance (Morris et al. 1997; Barach & Gantisky, 1995).<br />

The literature talks of how crucial the choice of successor is, of how family businesses are idiosyncratic to a high degree (Nooteboom, 1993), of how personal and emotional factors can determine who the next leader will be (Lee, Lim & Lim, 2003), and of how such firms are more likely to appoint their potentially less competent offspring (Pollak, 1985). It may be true that some founders make the decision of intergenerational succession due to personal and emotional needs instead of strategic goals for the future performance of the company, but some do it because they believe it is the right choice, or because it may be the only one available; family organisations usually have a smaller pool of talent on which to draw (Lansberg, 1999; Miller, Steier, & Le Breton-Miller, 2003), and so successor options can prove to be slim, potentially leaving reliance on an unsuitable candidate.<br />

4. Conclusion<br />

A contextual five-stage model has been built from the results of the analysis, for use in family-run SMEs that have come to the stage where the founder is reviewing potential exit strategies and intergenerational succession is chosen.<br />

Stage 1: Exit Strategy Choice<br />

Stage 2: Successor Choice – Review of their (a) motivation (b) ability & (c) attributes: Successor Level vs.<br />

Required Level<br />

Stage 3: Review of relationship between founder and successor – review of their perception on shared vision &<br />

needs of the succession process<br />

Stage 4: Successor Development – Design & implementation to fill ability and attribute gaps through Internal (early<br />

exposure, knowledge transfer – founder led & internal training program) & External methods (external work<br />

experience & formal education)<br />

Stage 5: Role Adjustment & Final Handover – Scenario planning and acceptance<br />

Firstly, the internally based mechanisms: early exposure to the business can help build an important foundation of organisational knowledge and understanding (Cabrera-Suarez et al., 2001). In Company A the successor has had such exposure, but in a role that has proved sporadic and limited, and due to this she does not hold much credibility or respect from the staff. As the leader of the organisation, the incumbent holds knowledge whose transfer to the successor is crucial, covering company strategy, marketing, financial results and expectations; the list is endless, but the potential to gain a rich first-hand grasp of the needs, mechanisms, culture and people within the organisation may prove invaluable. This is a stage where the relationship between the two is vital and the mutual adjustment process (Handler, 1992) can begin; a point in time where the founder gives, and the successor should respectfully take. While this communication stream may never truly diminish, it is as important that the founder from this point comes to terms with the idea of letting go of the organisation as it is that the successor gets ready to take it; the more effective this is, the more probable a smooth final handover period.<br />

Each of the participating organisations saw the value in a well-constructed internal training programme, this supported by the findings of Neubauer & Lank (1998): an opportunity to build a deep understanding of each aspect and department, to bring clarity to how the business works and how each part is linked, and to build relationships, respect and potential credibility through training, working and performing alongside staff at each level of the company. This is a stage where clear objectives and timelines need to be set, and where communication to the staff involved must be clear and given in plenty of time, depending on the size and complexity of the organisation and the market it operates in. For example, within Company A the successor would potentially need to spend a minimum of six weeks visiting the branches, two weeks each in the Operations, H.R. and Accounts departments, with a shorter introduction to Marketing and I.T., in order to gauge a rich understanding of the intricacies of each. This would bring an opportunity to work and spend valuable time with the customer-facing staff, but also with the Finance Director and Operations Manager, who bring a greater depth of knowledge, and would paint a realistic picture of the company’s economic position.<br />

Externally based successor development has proved to be the largest chasm between literature and founder perception in the investigated cases. Outside work experience, which Barach & Gantisky (1995) state can bring with it some very valuable lessons, and even more so formal education, which Morris et al. (1997) suggest positively correlates with a smooth transition and post-succession performance, are the two aspects that founders do not believe to be very necessary. Whether this is down to limited personal exposure or to their high confidence in internally based training is open to question. However, the conclusions from the case studies suggest that each family organisation should consider the benefits of both, to enable successor development that increases their knowledge base, their understanding of the economy, and their application of business concepts and techniques.<br />

Ibrahim et al. (2004) state this may be due to firms perceiving external training as more of a cost burden than an investment in their future growth; however, the author suggests that the founders perceive on-the-job training and personal knowledge transfer as a more valuable platform of development than external alternatives, and so it is seen not so much as a cost burden as a less effective option.<br />

Stage 5 of the overall framework relates to handover. The ideal scenario (Figure 4, Diagram 1) portrayed by the literature (Handler, 1990, 1992) and the primary research (Founders B, C, D, E, F) is for the succession process to be completed as planned: for the responsibilities of the incumbent and successor to mutually adjust over a period of time. If the organisation in question has a board of directors, such as Company A, it can help facilitate the final crossover; supporting the successor in their new role, whilst bringing confidence to the incumbent that all will be well, resisting their potential want to continue and helping them transfer into their planned figurehead role. The type of succession will at this point start to develop; the successor within Company A may prove to be what Miller, Steier and Le Breton-Miller (2003) summarise as a conservative, stating that she will “sit back and see how things go”.<br />



Figure 4: Handover Scenarios<br />

Poor choice of successor can clearly be an obstacle to success, as can a lack of confidence or insufficient development on the part of the successor (McGiven, 1978; Schein, 1985). However, the founder’s willingness to step down is also tested at this critical stage (Sharma et al. 2001). If any of the above becomes reality (Figure 4, Diagram 2), a period of frustration can be encountered; a time when the important relationship (Cabrera-Suarez et al. 2001) between the two can falter, and a period of organisational uncertainty for many staff can follow. Any such obstacle needs to be managed; in an organisation such as Company A, whose successor expects “a difficult transition” and whose incumbent does not see himself fully retiring, the board of directors will have to play a pivotal management role.<br />

Handler (1990) has said that the realisation of poor or failing health can be a key stimulus for the incumbent to start letting go of the business. Company A’s founder has been confirmed to have an incurable illness, so the future is not as clear as one would like; in the unfortunate occurrence of his sudden death, the company should have a contingency to help ensure its long-term future (Figure 4, Diagram 3). In such a case the board of directors can take a central role, and the successor’s continued training and development is crucial to their ability to lead and manage the organisation in the future.<br />

A successful intergenerational succession process within a family owned and run business needs to be carefully planned for, implemented and managed. From the analysis and the review of current thinking, the relationship between founder and chosen successor, and the development of the successor’s motivation, abilities and attributes, are central; leadership, organisational knowledge, management ability and credibility within the organisation are deemed important. With training mechanisms including internal training programmes, formal education and outside experience all reported to correlate positively with a smooth transition and post-succession performance, and with a smooth final handover viewed as vital, it seems that the process needs to be thorough and complete; one in which, if any parts are missing or facilitated to a low level, failure could potentially follow.<br />



5. References<br />

Barach, J.A. and Gantisky, J.B. (1995). Successful succession in family business. Family Business Review, 8(2),<br />

131-155.<br />

Berry, M. (1998) Strategic planning in small, high-tech companies, Long Range Planning, Vol. 31 pp. 455-66.<br />

Birley, S. (1986). Succession in the family firm: the inheritor's view. Journal of Small Business Management, 24(3), 36-43.<br />

Cabrera-Suarez, K., De Saa-Parez, P., and Garcia-Almeida, D. (2001). The succession process from resource and<br />

knowledge-based view of the family firm. Family Business Review, 14(1), 37-47.<br />

Chrisman, J.J., Chua, J.H., & Sharma, P. (1998). Important attributes of successors in family businesses: an exploratory study. Family Business Review, 11(1), 19-34.<br />

Dyck, B., Mauws, M., Starke, F.A., and Mischke, G.A. (2002). Passing the baton: The importance of sequence,<br />

timing, technique and communication in executive succession. Journal of Business Venturing 17: 143-162.<br />

Dyer, W.G. Jr. (1986). Cultural change in family firms: Anticipating and managing business and family transitions. San Francisco: Jossey-Bass.<br />

Handler, W.C. (1990). Succession in family firms: a mutual role adjustment between entrepreneur and next-generation family members. Entrepreneurship Theory and Practice, 15(1), 37-51.<br />

Handler, W.C. (1992). The succession experience of the next generation. Family Business Review, 5(3), 282-372.<br />

Howorth, C., Ali, Z.A. (2001). Family business succession in Portugal. Family Business Review. 14 (3), 231-244.<br />

Ibrahim, A.B., Soufani, K., Poutziouris, P. & Lam, J. (2004). Qualities of an effective successor: the role of<br />

education and training. Education & Training Vol. 26. pp.474-480.<br />

Kets de Vries, M. (2001). The Leadership Mystique, Prentice-Hall, London.<br />

Kirzner, I. (1985). Discovery and the Capitalist Process. Chicago, IL: University of Chicago Press.<br />

Lansberg, |. (1999). Succeeding Generations: Realizing the Dream of Families. Harvard Business School Press,<br />

Boston.<br />

Lee, K.S., Lim, G.H. & Lim, W.S. (2003). Family Business Succession: Appropriation Risk and Choice of<br />

Successor. Academy of Management Review. Vol. 28, No4, 657-666.<br />

Longenecker, J.G., & Schoen, J.E. (1978). Management Succession in the family business. Journal of Small<br />

Business Management. July, 1-5.<br />

McCarthy, B. (2001). Strategy is personality-driven, strategy is crisis driven: insights from entrepreneurial firms.<br />

Management Decision 41/4, pp. 327-339<br />

McGiven, C. (1978). The dynamics of management succession. Management Decision, 16(1), 32.<br />

Miller, D., Steier, L., & Le Breton–Miller, I. (2003). Lost in Time: Intergenerational succession, change and failure<br />

in family business. Journal of Business Venturing, 18(4), 513-531.<br />

Miller, D., Steier, L., & Le Breton –Miller, I. (2004). Towards an Integrative Model of Effective FOB Succession.<br />

Entrepreneurship Theory and Practice.<br />

Morris, M.H., Williams, R.O., Allen, J.A., and Avila, R.A. (1997). Correlates of success in family business<br />

transitions. Journal of Business Venturing, 12(5), 385-401.<br />

Neubauer, F., & Lank, A.G. (1998). The family business. London: Macmillan.<br />

Nooteboom, B. (1993). An analysis of specifically in transaction cost. Organisational Studies, 14:443-451.<br />

Pollak, R.A. (1985). A transaction cost approach to families and households. Journal of Economic Literature, 23:<br />

581-608.<br />

Potts, T.L., Schoen, J.E., Engel Loeb,, M., & Hulme, F.S. (2001). Effective retirement for family business ownermanagers:<br />

Perspectives of financial planners-Part 2. Journal of Financial Planning, 14(7), 86-96.<br />

Sharma, P., Christman, J.J., Pablo, A.L., & Chua, J.H. (2001). Determinants of initial satisfaction with the<br />

succession process in family firms: A conceptual model. Entrepreneurship Theory & Practice, 25(3), 17-35.<br />

Yin, R.K. & Heald, K.A. (1975). Using the Case Survey Method to Analyze Policy Studies (Electronic Version)<br />

Administrative Science Quarterly, Sept 1975, Vol 20. pp 371-381.<br />



CREDIT RATING MODEL FOR SMALL AND MEDIUM ENTERPRISES WITH ORDERED LOGIT<br />

REGRESSION: A CASE OF TURKEY<br />

Res. Asst. Özlen ERKAL<br />

Istanbul University - Avcılar Campus<br />

Industrial Engineering Department<br />

34320 Avcılar-Istanbul / TURKEY<br />

ozlenerkal@istanbul.edu.tr<br />

Res. Asst. Tugcen HATİPOĞLU<br />

Sakarya University - Esentepe Campus<br />

Institute of Science and Technology<br />

54187 Serdivan-Sakarya/TURKEY<br />

thatipoglu@sakarya.edu.tr<br />

Abstract. Under the pressure of competition, the market is shared by a great number of companies that differ in their productivity levels, ability to adapt, responsiveness to new technologies, resilience to financial crises and roles in enhancing employment. With their products and services, Small and Medium Enterprises (SMEs) can compete with large-scale companies; they can also support and complement large-scale companies by constituting subsidiary industries. SMEs therefore play an important role in developing national economies. However, financial crises affect SMEs, as they do other companies. Especially in crisis periods, enterprises dealing with financial problems are expected to demand credit, while banks are more cautious in meeting this demand because of non-performing loans. In order to determine which enterprises should be granted credit, banks need accurate statistical analysis. In this study, for the year 2010, a liquidity crisis period, the financial distress probabilities of 145 SMEs, drawn from 250 enterprises that applied to a bank for credit, are evaluated with the help of ordered logit regression using relevant financial ratios. The analysis of the data shows that 7 of 19 financial ratios can be regarded as more important than the others, serving as the main indicators for banks in credit approval or denial decisions. In the study, risk levels are determined with the Altman Revised Z-Score model and an early warning system is proposed.<br />

Key Words: Financial distress, Altman Z Score, Ordered Logit Regression.<br />

1. Introduction<br />

Gaining sustainable economic growth, enhancing employment and raising the standard of living in individual countries would definitely contribute to the development of the world economy. However, a sharp increase in financial competition can be observed in the intensified financial environment triggered by globalization. The market is shared by various financial and productive units, each with its own characteristics, such as productivity levels, ability to adapt, responsiveness to new technologies, resilience to financial crises and roles in enhancing employment. Small and Medium Enterprises (SMEs) play an important role in both local and global market environments while competing with large-scale companies. They should also exist in even a simple market environment in order to support large-scale companies by constituting subsidiary industries. In most economies, relatively small-scale enterprises are greater in number, and SMEs are generally responsible for driving innovation and competition. In such a multi-unit, dynamic and competitive market structure, financial stability generally cannot be maintained. Credit-related businesses become progressively more diverse and complex, and they also grow rapidly, straining the limits of traditional methods for controlling and managing credit risk (Treacy & Carey, 2000). Moreover, a financial crisis may pose additional persistent risks to economic health and may affect the overall consistency of the financial environment for an uncertain period of time.<br />

The financial crash that started in the housing market in 2007 caused financial instability that spread in waves over the following years; 2010 was a year in which the liquidity crisis was intensely present. Not only the real estate and credit bubble crisis itself, but also the processes built upon it, triggered the conversion of this instability into a serious crisis environment (Ozturk & Govdere, 2010). It would therefore not be hard to say that during 2010 a great many companies needed financial help. Enterprises differ in their resilience to the financial crisis according to their properties, and they are usually confined to demanding credit from banks to get out of any kind of financial problem.<br />

SMEs' financial decisions have crucial effects, especially in crisis periods. To minimize the damage, banks must provide supportive measures such as granting SMEs credit when they are in financial need. On the other side, in order to give banks better insight, an accurate evaluation system for credit risk scoring is indispensable within financial risk research. The credit process in banks consists of making approval or denial decisions on the basis of various financial assessments, and process policies differ depending on the organizational and financial structures of the potential borrowers. Therefore, in order to determine which enterprises should be granted credit, banks need certain and accurate statistical methods to analyze the relevant financial ratios. This issue is crucial in financial evaluation: if the bank accepts a credit application and the debt is then not repaid, the result is a large economic loss; conversely, if SMEs that are capable of repaying the credit are rejected, the recovery of the economy is blocked.<br />

2. Credit Rating Systems<br />

Many banks have introduced more structured or formal systems for approving loans, monitoring portfolios, management reporting, analyzing the adequacy of loan loss reserves and analyzing loan pricing (Treacy & Carey, 2000). Rating judgements are financial evaluations of a potential borrower's ability to repay its debt. The evaluation is fundamentally based on the borrower's historical borrowing and repayment data, as well as the availability of its assets and the extent of its liabilities. Credit ratings have also been used for adjusting insurance premiums, determining employment eligibility and establishing the amount of a utility or leasing deposit (Falavigna, 2010).<br />

2.1 Altman Z-Score Model<br />

The most famous credit risk model is Altman's Z-score model, which employs accounting-ratio-based information to determine and quantify how the probability of default relates to a set of financial ratios. The Altman Z-score model relies mostly on information obtained from companies' financial statements (Leon Li & Miu, 2010). The approach is a classification method that projects high-dimensional data onto a line and performs classification in this one-dimensional space; the projection maximizes the distance between the means of the two classes while minimizing the variance within each class (Thomas Ng, Wong & Zhang, 2011). The Z-score model is a linear analysis in which five measures are objectively weighted and summed to arrive at an overall score that then becomes the basis for classifying firms into one of the a priori groupings, distressed and non-distressed (Altman, 2000).<br />

However, Altman's Revised Z-score equation is adopted in this study, which allows 3 different zones to be analyzed instead of 2. The revised Z-score model is presented as:<br />

Z' = 0.717T1 + 0.847T2 + 3.107T3 + 0.420T4 + 0.998T5<br />

where:<br />

T1 = (Current Assets-Current Liabilities) / Total Assets,<br />

T2 = Retained Earnings / Total Assets,<br />

T3 = Earnings before Interest and Taxes / Total Assets,<br />

T4 = Book Value of Equity / Total Liabilities,<br />

T5 = Sales/ Total Assets<br />

Zones of discrimination:<br />

- Z' > 2.9: "Safe" zone<br />

- 1.23 < Z' < 2.9: "Grey" zone<br />

- Z' < 1.23: "Distress" zone<br />
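The revised Z'-score and the zone coding used later in the study (2 = Safe, 1 = Grey, 0 = Distress) can be sketched as follows; the ratio values in the example are illustrative, not taken from the study's sample:

```python
def revised_z_score(t1, t2, t3, t4, t5):
    """Altman's revised Z'-score: the weighted sum of the five ratios T1-T5 above."""
    return 0.717 * t1 + 0.847 * t2 + 3.107 * t3 + 0.420 * t4 + 0.998 * t5

def zone(z_prime):
    """Map a Z'-score to the three discrimination zones."""
    if z_prime > 2.9:
        return "Safe"      # coded 2 in the study
    if z_prime > 1.23:
        return "Grey"      # coded 1
    return "Distress"      # coded 0

# Hypothetical ratios for a single firm (illustrative only)
z = revised_z_score(0.2, 0.1, 0.15, 0.5, 1.2)  # ≈ 2.102, i.e. the "Grey" zone
```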

Presumably, Altman’s Z-score is negatively correlated with the firm’s current and prospective financial<br />

situation, and the firms with low Z-scores have relatively high demand for loans because without additional<br />

borrowing, these firms may have to go out of business. However, profit-maximizing banks in a well-functioning<br />

market economy typically would not be willing to advance additional loans to firms that have low Z-scores and,<br />

therefore, are relatively likely to default (Alexeev & Kim, 2008).<br />

3. Ordered (Ordinal) Logit (Logistic) Regression<br />

Ordinal logit models are extensively used in the literature on bond ratings, organizational ranking and occupational attainment, where the dependent variable has a natural ranking. In the case of a binary response variable, the assumptions of linear regression are not valid. Logistic (logit) regression is a method for fitting a regression curve, y = f(x), when y consists of proportions, probabilities or binary-coded (0-1, failure-success) data. It is a nonlinear regression model that forces the output (the predicted values) to be either 0 or 1 and is used when the dependent variable is categorical. As with the other techniques, however, the independent variables in logit regression may be either continuous or categorical.<br />

Although the outcome is discrete, multinomial logit or probit models would fail to account for the ordinal nature of the dependent variable. If the situation being modeled is in fact unordered, an ordered model can lead to serious biases in the estimation of the probabilities; on the other hand, if the type of event under study is ordered, an unordered model loses efficiency rather than consistency (Nam, 1997).<br />

When examining access to credit markets, the underlying latent variable is the ability to observe and/or demonstrate creditworthiness; thus both the borrower's signaling efforts and the lender's screening efforts are involved. The cutpoints in ordered logit models separate the different categories according to the borrower's ability to demonstrate creditworthiness and the lender's ability to evaluate it (Joshi, 2005).<br />

When an ordinal regression is fitted, the relationships between the independent variables and the logits are assumed to be the same for all logits, so that the results form a set of parallel lines or planes. This assumption can be checked by allowing the coefficients to vary, estimating them, and then testing whether or not they are all equal (ASPC v.13, 2011).<br />

For analysis, it is often of interest to classify the units into various segments. For banks, the risk profiles into which potential borrowers are classified are usually defined from the outset as discrete categories with a certain ordering; they form a finite and ordered partition into non-overlapping subsets (Fok & Franses, 2002).<br />
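A minimal sketch of this proportional-odds structure: with increasing cutpoints κ_1 < … < κ_{K-1} and a single linear predictor x'β shared by all categories, the cumulative probabilities are P(Y ≤ j) = 1 / (1 + exp(−(κ_j − x'β))), and category probabilities are their successive differences. The numbers in the usage line are illustrative:

```python
from math import exp

def cumulative_logit_probs(xb, cutpoints):
    """Category probabilities under a proportional-odds (ordered logit) model.

    xb        -- linear predictor x'beta, shared by all categories
    cutpoints -- increasing thresholds kappa_1 < ... < kappa_{K-1}
    Returns a list of K probabilities, one per ordered category.
    """
    logistic = lambda z: 1.0 / (1.0 + exp(-z))
    # Cumulative probabilities P(Y <= j) for each cutpoint, then P(Y <= K) = 1
    cum = [logistic(k - xb) for k in cutpoints] + [1.0]
    # Successive differences give the individual category probabilities
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# With xb = 0 and symmetric cutpoints, the outer categories are equally likely
probs = cumulative_logit_probs(0.0, [-1.0, 1.0])
```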

4. Application<br />

In this study, the financial data of 145 SMEs is used as the sample; these were selected as the more accurate records among the 250 SMEs that applied for credit to a public bank in Turkey in 2010. In order to analyze the financial status of these enterprises, 19 characteristic financial ratios are selected at the outset.<br />

X1 Liquidity ratio<br />

X2 Current ratio<br />

X3 Cash ratio<br />

X4 Equity Multiplier (EM)<br />

X5 Short term liabilities/Total assets<br />

X6 Short term liabilities/Total liabilities<br />

X7 Long term liabilities/ Total assets<br />

X8 Long term liabilities/ Total liabilities<br />

X9 Fixed assets/Equity<br />

X10 Fixed assets/Continuous Equity<br />

X11 Net Working Capital Turnover<br />

X12 Current Assets Turnover<br />

X13 Assets turnover (annual)<br />

X14 Net Profit Margin (annual)<br />

X15 Operational Profit Margin (OLPM) (annual)<br />

X16 Profit Capital Ratio – (annual)<br />

X17 Return on Assets - annual<br />

X18 Ebitda/Net sales<br />

X19 Earnings Per Share (EPS)<br />

Table 1: Selected financial ratios<br />
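As an illustration of how such ratios are computed, the following sketch uses textbook definitions for a few of the ratios in Table 1 (an assumption — the paper does not spell out each formula) and hypothetical year-end figures:

```python
# Hypothetical year-end figures for a single SME (illustrative only)
firm = {
    "current_assets": 500.0, "current_liabilities": 250.0,
    "long_term_liabilities": 100.0, "total_assets": 1000.0,
    "net_sales": 1200.0, "net_profit": 90.0,
}

# Textbook definitions assumed for four of the ratios in Table 1
x5 = firm["current_liabilities"] / firm["total_assets"]    # X5: short term liabilities / total assets
x7 = firm["long_term_liabilities"] / firm["total_assets"]  # X7: long term liabilities / total assets
x13 = firm["net_sales"] / firm["total_assets"]             # X13: assets turnover (annual)
x17 = firm["net_profit"] / firm["total_assets"]            # X17: return on assets (annual)
```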



A stepwise ordered logit model is preferred in order to handle possible multicollinearity problems and to eliminate some of the correlated ratios from the analysis. At the first step, the Altman Revised Z-score is calculated and the financial status of the firms is categorized into 3 groups: Safe, Grey and Distress. Safe enterprises are coded "2", Grey enterprises "1" and Distress enterprises "0" according to their safety level for credit approval. The data is evaluated with SPSS, Version 17.<br />

Y = 0 (Distress): N = 86, marginal percentage 59.3%<br />

Y = 1 (Grey): N = 18, marginal percentage 12.4%<br />

Y = 2 (Safe): N = 41, marginal percentage 28.3%<br />

Valid: N = 145 (100.0%); Missing: 0; Total: 145<br />

Table 2: Case Processing Summary (Zones and Percentages)<br />

The main aim is to determine the correct financial ratios for banks to use in credit approval or denial decisions when the financial status of the enterprise is treated as an ordered logit variable, and to propose these ratios as the main indicators for the aforementioned financial decisions. Testing the parallel lines assumption of the model is crucial for the accuracy of the information obtained.<br />

Model | -2 Log Likelihood | Chi-Square | df | Sig.<br />

Null Hypothesis | 84.094 | | |<br />

General | 40.671 | 43.422 | 7 | .100<br />

Link function: Logit.<br />

Table 3: Test of Parallel Lines results<br />

Since p = .100 > 0.05, the null hypothesis that the lines are parallel cannot be rejected; the assumption is therefore satisfied and the accuracy of the results obtained from the model is statistically supported. The model's goodness of fit is then evaluated.<br />

Model | -2 Log Likelihood | Chi-Square | df | Sig.<br />

Intercept Only | 268.539 | | |<br />

Final | 84.094 | 184.445 | 7 | .000<br />

Link function: Logit.<br />

Table 4: Model Fitting Information<br />
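As a quick consistency check (a sketch using only the −2 log-likelihood values reported above), the chi-square in Table 4 is simply the drop in −2LL from the intercept-only model to the final model with its 7 retained predictors:

```python
# -2 log-likelihood values reported in Table 4
neg2ll_intercept_only = 268.539
neg2ll_final = 84.094

# Likelihood-ratio chi-square = difference in -2LL; df = 7 added predictors
lr_chi_square = neg2ll_intercept_only - neg2ll_final  # 184.445, matching Table 4
```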

 | Chi-Square | df | Sig.<br />

Pearson | 94.620 | 281 | 1.000<br />

Deviance | 84.094 | 281 | 1.000<br />

Link function: Logit.<br />

Table 5: Goodness-of-Fit results<br />



Cox and Snell | .870<br />

Nagelkerke | .854<br />

McFadden | .887<br />

Link function: Logit.<br />

Table 6: Pseudo R-Square Results<br />

The inequality p > 0.05 in Table 5 indicates that the logit link function succeeds in providing an adequate goodness of fit.<br />

Table 6, which reports the explanatory power of the independent variables, shows a high degree of explanation: Cox and Snell R² = 0.870, Nagelkerke R² = 0.854 and McFadden R² = 0.887. The output of the model is shown below.<br />

Parameter | Estimate | Std. Error | Wald | df | Sig. | 95% CI Lower Bound | 95% CI Upper Bound<br />

Threshold [Y = 0] | 26.529 | 5.239 | 25.645 | 1 | .000 | 16.262 | 36.797<br />

Threshold [Y = 1] | 29.142 | 5.512 | 27.949 | 1 | .000 | 18.338 | 39.946<br />

Location X1 | -1.015 | .196 | 26.722 | 1 | .000 | -.630 | 1.400<br />

Location X5 | 10.808 | 3.329 | 10.542 | 1 | .001 | 4.284 | 17.331<br />

Location X7 | 72.291 | 13.237 | 29.827 | 1 | .000 | 46.347 | 98.234<br />

Location X11 | -.096 | .045 | 4.483 | 1 | .034 | -.184 | -.007<br />

Location X13 | -15.000 | 2.748 | 29.805 | 1 | .000 | -9.615 | 20.385<br />

Location X16 | -.243 | .059 | 16.770 | 1 | .000 | -.127 | .359<br />

Location X17 | -.204 | .082 | 6.183 | 1 | .013 | -.365 | -.043<br />

Link function: Logit.<br />

Table 7: Parameter Estimates Results<br />
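Assuming the usual SPSS parameterization for ordinal regression, P(Y ≤ j) = logistic(threshold_j − x'β), the estimates in Table 7 can be turned into zone probabilities for a new applicant; the ratio values below are hypothetical, not drawn from the sample:

```python
from math import exp

# Thresholds and location coefficients taken from Table 7
thresholds = [26.529, 29.142]  # [Y = 0], [Y = 1]
beta = {"X1": -1.015, "X5": 10.808, "X7": 72.291,
        "X11": -0.096, "X13": -15.000, "X16": -0.243, "X17": -0.204}

# Hypothetical ratio values for one credit applicant (illustrative only)
x = {"X1": 1.5, "X5": 0.40, "X7": 0.35, "X11": 2.0,
     "X13": 0.10, "X16": 1.0, "X17": 0.5}

logistic = lambda z: 1.0 / (1.0 + exp(-z))
xb = sum(beta[k] * x[k] for k in beta)

# Assumed SPSS PLUM form: P(Y <= j) = logistic(threshold_j - x'beta)
p_le = [logistic(t - xb) for t in thresholds]
probs = [p_le[0], p_le[1] - p_le[0], 1.0 - p_le[1]]  # P(Distress), P(Grey), P(Safe)
```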

Because a stepwise model is used, the resulting model is built only from variables that are all significant.<br />

The significant variable X1 refers to the liquidity ratio. This ratio indicates the degree of safety of the current assets in terms of the liquidity level of the enterprise. As an indicator of the sustainability of activities under the current financial status and under unexpected market conditions, the liquidity ratio is especially important for lenders when interpreting the borrower's liquidity level and its adequacy for paying back maturing liabilities.<br />

Variable X5 is the ratio of short-term liabilities to total assets. It can be used as a risk indicator showing how much of total assets is funded by short-term liabilities. A high value can be regarded as an indicator of increased risk, although it also raises the probability of higher dividends per shareholder through the leverage effect. The short-term liabilities/total assets ratio can be used together with the Total Assets/Active Assets ratio: if the ratio related to short-term liabilities is low while the Total Assets/Active Assets ratio is high, a relatively less problematic financial status can be expected.<br />

A value below 33% for the Short-Term Liabilities/Active Assets ratio is adopted as a common criterion by western financial agencies; however, in countries that have problems with resource procurement or an inflationary tendency, this ratio is expected to be approximately 50%. Enterprises should match the maturity of their liabilities to the period for which the assets are used. Taking on liabilities with a different mindset may leave enterprises in a difficult situation; thus, enterprises that invest in current assets with their short-term liabilities deserve particular attention in financial evaluation.<br />



Variable X7 is the ratio of long-term liabilities to total assets. It shows the share of long-term liabilities in total assets and gives information about the company's power to obtain long-term funds. When this ratio is high, the company finances its assets mainly with long-term resources, and if it grows further it may indicate difficulties in paying debt amortization during a recession. In general, companies that have started large investments also have a high value for this ratio; if the ratio does not decrease after the investments, the added value of those investments can be interpreted as unsatisfactory. In sectoral analysis, this ratio requires attention; for a better interpretation, analyzing the short-term liabilities as well would lead to more reliable decisions.<br />

Net working capital turnover, shown by variable X11, measures how successful the company is at generating sales volume with minimum (net) working capital while sustaining its activities. A high value indicates that net working capital is used efficiently and suitably: inventory and accounts receivable turnover are high, or inventories and receivables require little working capital, while short-term financial expenses are high and the current ratio is low. Conversely, a low value means that the company holds more working capital than needed: inventory and accounts receivable turnover are low, and the company's cash holdings exceed its needs. With increasing sales, net working capital tends to increase, mainly because trade receivables rise with growing demand.<br />

X13, assets turnover, is a ratio that relates the company's sales revenue to its total assets. It shows financially whether the company's assets are growing in step with its business and whether the company has invested in assets beyond what was originally planned; it can also be regarded as a measure of technology use or asset consumption within the company. If fixed assets form a relatively high share of total assets, assets turnover falls, a case observed mostly at capital-intensive companies; conversely, a high value is normal at companies with low fixed assets, such as trading and financial companies. Assets turnover is a significant indicator of the company's profitability: other circumstances being fixed, a high ratio in a firm means that its profitability ratio would also be high. For organizations with a high share of long-term (fixed) assets as a consequence of low assets turnover, particularly those with new investments, the future rate of profit is indefinite because it depends on changes in demand for the company's products. For this reason, assets turnover should also be used as an indicator of risk; when a company's assets turnover tends to decrease, idle capacity may be the main reason.<br />

Variable X16, the profit/capital ratio, measures the company's performance on equity; it is particularly useful for comparing companies in the same sector. A high value indicates sound investment choices and the ability to control related expenses.<br />

Variable X17 refers to return on assets, which measures the efficiency achieved with the size of the company. Return on assets indicates how efficient a company is at using its total assets, including financial investments; it is basically the combination of net profit margin and assets turnover.<br />
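The identity mentioned here, ROA = net profit margin × assets turnover, can be checked numerically; the figures are hypothetical, and the textbook definitions margin = net profit / net sales and turnover = net sales / total assets are assumed:

```python
# Hypothetical figures (illustrative only)
net_profit, net_sales, total_assets = 90.0, 1200.0, 1000.0

net_profit_margin = net_profit / net_sales     # X14
assets_turnover = net_sales / total_assets     # X13
return_on_assets = net_profit / total_assets   # X17

# ROA = margin * turnover, since net_sales cancels out of the product
check = net_profit_margin * assets_turnover
```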

5. Conclusion<br />

The market is shared by various enterprises that differ in character, and SMEs are noteworthy productive units for sustaining innovation and competition. A financial crisis may exert pressure on SMEs, and these effects may seriously damage the financial structure of the market. For financial support, banks are generally the preferred choice of SMEs applying for credit in times of financial need; from the banks' side, however, the set of SMEs whose applications should be approved must be determined.<br />

The financial data of 145 SMEs that applied for credit to a public bank in 2010, a liquidity crisis period, is analyzed in the study. To interpret the financial status of the enterprises, 19 characteristic financial ratios are selected at the outset. With the Altman Revised Z-score model, the enterprises are divided into 3 zones in order to separate their degree of safety for granting credit: 86 enterprises (59.3%) are in Zone 0 and would not be able to pay back any probable credit debt; 18 enterprises (12.4%) are in Zone 1, for which the financial power for repayment cannot be clearly established; and 41 enterprises (28.3%) are in Zone 2, the safety zone in which banks can grant credit. Of the 19 financial ratios analyzed for the credit approval/denial decisions, 7 are proposed as more noteworthy in representing the SMEs' financial ability to pay their debts back: the Liquidity Ratio, Short-Term Liabilities/Total Assets, Long-Term Liabilities/Total Assets, Net Working Capital Turnover, Assets Turnover, Profit/Capital Ratio and Return on Assets. These ratios are the main indicators for banks in determining which enterprises to grant credit. It is recommended that SMEs check the relevant ratios permanently to keep their competitive edge in financial markets, and that banks pay more attention to them in financial decisions.<br />

6. References<br />

Alexeev, M., & Kim, S. (2008). The Korean financial crisis and the soft budget constraint. Journal of Economic Behavior & Organization, 68(1), 178-193.<br />

Fok, D., & Franses, P.H. (2002). Ordered logit analysis for selectively sampled data. Computational Statistics & Data Analysis, 40, 477-497.<br />

Joshi, M.G. (2005). Access to credit by hawkers: What is missing? Theory and evidence from India. Ph.D. Thesis, The Ohio State University.<br />

Leon Li, M., & Miu, P. (2010). A hybrid bankruptcy prediction model with dynamic loadings on accounting-ratio-based and market-based information: A binary quantile regression approach. Journal of Empirical Finance, 17, 818-833.<br />

Nam, D. (1997). Econometric analysis of highway incident duration, public perceptions and information for advanced traveler information systems. Ph.D. Thesis, University of Washington.<br />

Özturk, S., & Govdere, B. (2010). The effects of the global financial crisis and Turkish economy. SDU The Journal of Faculty of Economics and Administrative Sciences, 15(1), 377-397.<br />

Thomas Ng, S., Wong, J.M.V., & Zhang, J. (2011). Applying Z-score model to distinguish insolvent construction companies in China. Habitat International.<br />

Treacy, W.F., & Carey, M. (2000). Credit risk rating systems at large US banks. Journal of Banking & Finance, 24, 167-201.<br />

http://www.cersi.it/itais2010/pdf/084.pdf, accessed 30.04.2011<br />

http://pages.stern.nyu.edu/~ealtman/Zscores.pdf, accessed 13.04.2011<br />

http://ww2.coastal.edu/kingw/statistics/R-tutorials/logistic.html, accessed 20.04.2011<br />

http://www.norusis.com/pdf/ASPC_v13.pdf, accessed 25.04.2011<br />



PRODUCTS WITH LONG AGING PERIOD IN THE AGRO-FOOD SYSTEM: ANALYSIS OF MEAT<br />

SECTOR<br />

Mattia Iotti, Giuseppe Bonazzi, Vlassios Salatas, University of Parma, Italy<br />

Email: mattia.iotti@unipr.it , www.unipr.it<br />

Abstract. The study aims to analyze the difference between profit and cash flow generation in firms operating in the meat sector with a long aging period of production. This type of analysis could be useful in an applied setting, considering the long fresh-meat aging period of some products, such as ham, as well as the level of capital required to finance investment in land, buildings and machinery in the sector. In general, firms of this type are often able to generate profit to provide a return to equityholders, but are not always able to generate sufficient cash flow to pay debts and distribute dividends. On this topic, the paper examines a panel of firms in order to quantify the difference between applying an economic approach and a financial approach to evaluating investment, comparing the results derived from ratio analysis and cash flow analysis. The expected result of the paper is an application of ratio analysis and cash flow analysis suited to small agro-food firms operating with aging of production.<br />

Keywords: Long aging period, Accrual and financial approach, Free cash flow, Cover ratio<br />

JEL classification: Q13 - Agricultural Markets and Marketing; Cooperatives; Agribusiness; Q14 - Agricultural Finance<br />

1 Introduction<br />

In Italy the meat sector is characterized by a herd of heavy pigs bred to be processed into typical Italian cold cuts, especially the typical ham (PDO ham); these hams use the fresh legs of pigs born, raised and slaughtered in a defined area, and the pigs must have quality characteristics defined in specific production rules. Firms in the agro-food system that produce food with a long aging period face problems related to the high level of capital requirement; they often have difficulties related to the duration of the financial cycle, because large investments are required in the start-up phase for the acquisition of industrial buildings, plants and equipment (Bonazzi et al., 2007). Moreover, the capital requirement is inherent in the typical production aging period, which ties up large volumes of capital, thereby expanding the capital requirement beyond that for equipment. In fact, the aging cycle of fresh meat causes a further expansion of capital requirements in order to sustain the working capital cycle. Considering the sales channel frequently used by firms in the sector, namely large-scale distribution (GDO), an increase in the average number of days to receive payment from customers can be noted, and this aspect of the financial dynamic increases the capital requirement of processing firms. In order to understand the management characteristics of this type of firm, it is useful to compare economic and financial results, so as to assess the firm's capacity to generate cash flow sufficient to sustain the business cycle, pay interest charges and distribute dividends to equityholders. In this type of firm a divergence between economic and financial results can indeed be observed, partly because of the long aging period of production. On this topic, the paper examines a panel of firms in the agro-food system operating in long-aging-period food production, in order to quantify the difference between applying an economic approach and a financial approach to analyzing firms' data in the sector, also comparing the results derived from ratio analysis and cash flow analysis.<br />

2 Methodology<br />

2.1 Income statement analysis<br />

The annual account, based on the rules of the Fourth EU Directive, is the general basis, required by law, for evaluating the economic, financial and patrimonial aspects of firm management; the annual account uses an accrual method in order to quantify the profit to equityholders, considering positive and negative income items on an accrual basis. This method focuses on the creation of value and does not directly consider the moment in which the cash inflow is achieved or the cash outflow occurs. The income statement, as a part of the annual account, is used to express the economic result of a firm in terms of profit, which is expressed using the accrual method. The income statement considers the moment in which value is created; in this way, the analysis of profitability is performed using the analysis of the annual economic accounts (Ferrero et al., 2005). The annual accounts analysis considers the data presented in the tables required by law (Andrei et al., 2006), as defined by the European Union and the national civil law in Europe (in Italy the civil code of commerce); in this way it is possible to compare data from different firms, even from different countries (Andrei et al., 2006). The firm income follows the accrual basis (Andrei et al., 2006) and expresses the moment of creation of value, so the income statement does not depend on the generation of cash flow from operations. The scheme required by law in Italy is:<br />

(1) Tω + ΔIα − Rβ − Sγ − Lδ − (D + A)ε − I + V + E − TX = Π<br />

In (1) T is turnover (sales), I (within ΔIα) is the stock (inventory), where Δ is the annual variation used to apply the accrual basis analysis, R is raw materials, S is services, L is labour, D + A is depreciation and amortization, I (standing alone) is the interest charge, V is revaluations and devaluations, E is extraordinary income or expenses, TX is income tax and Π is profit; ω is a multiplicative parameter applied to the respective quantity in order to calculate the total amount of turnover (Tω); α, β, γ, δ and ε are, respectively, the multiplicative parameters applied to the respective quantities in order to calculate ΔIα (a non-monetary item), the raw materials cost Rβ, the services cost Sγ, the labour cost Lδ and the depreciation and amortization cost (D + A)ε (a non-monetary cost). So it is (Lagerkvist et al., 1996):<br />

(2) Tω = Σ(i=1..I) Tωi ; ΔIα = Σ(j=1..J) ΔIαj ; Rβ = Σ(k=1..K) Rβk ; Sγ = Σ(l=1..L) Sγl ; Lδ = Σ(m=1..M) Lδm ; (D + A)ε = Σ(n=1..N) (D + A)εn<br />

Where in (2) i, j, k, l, m and n index respectively the different items of product sold (i) and the items of production factors bought (j, k, l, m and n). In (1) and (2) costs are grouped according to the different nature of the cost. Moreover, considering that Rβ and Sγ express the external costs (Ce), defined as the costs that remunerate external production factors, it is possible to express:<br />

(3) Ce = Rβ + Sγ<br />

It could be useful to note that the pre-tax civil income (Πθ) is different from the pre-tax fiscal income (Πζ) in Italy, because many types of cost with civil relevance have no fiscal relevance; it is:<br />

(4) Tω + ΔIα − Ce − Lδ − (D + A)ε − I + V + E = Πθ<br />

And then, considering that Cnfr is the cost with no fiscal relevance, and considering a tax rate t, it is:<br />

(5) Πθ + Cnfr = Πζ ; t · Πζ = TX<br />

In order to obtain more information about the capacity to generate sources of finance, it is useful to apply the value added reclassification of the income statement (Ceccacci et al., 2008), which is expressed as follows:<br />

(6) Tω + ΔIα − Ce = VA ; VA − Lδ = EBITDA ; EBITDA − (D + A)ε = EBIT ; EBIT − I + V + E − TX = Π<br />

In (6) EBITDA is earnings before interest, taxes, depreciation and amortization, and EBIT is earnings before interest and taxes. EBITDA expresses the capacity to generate cash and sustain the financial cycle, being an intermediate income margin that approximates the cash created by the income cycle, since it does not consider non-monetary costs such as depreciation and amortization.<br />

2.2 Cash flow statement and balance sheet analysis<br />

The cash flow statement (Brealey et al., 2003) can then be applied in order to quantify directly the generation of cash, deriving the analysis from the annual account; the cash flow statement is also useful to analyze the different sources of cash flows (Shireves et al., 2000). It is:<br />

(7) Π + (D + A) + TX = CF ; CF − ΔNWC = OCF ; OCF − ΔFA = UFCF ; UFCF − DS = FCFE<br />

In (7) CF is cash flow, OCF is operating cash flow, UFCF is unlevered free cash flow, and FCFE is the cash flow available for equityholders (free cash flow). CF is profit (Π) increased by the costs that do not cause an outflow of money (D + A) and by the impact of income taxes (TX); OCF quantifies the absorption of net working capital (NWC) and has particular importance in the analysis of firms with great absorption of working capital, such as long aging<br />


period firms; UFCF is the sum of OCF and the absorption of capital resulting from investments in fixed assets (FA); UFCF is the cash flow available for debt service (DS, where DS = K + I, with K the principal and I the interest charge). FCFE is the cash flow available for equityholders. The reclassification of the balance sheet<br />

(Ceccacci et al., 2008) is conducted according to the liquidity level:<br />

(8) TA = WCtA + FA = WCiA + WCcA + WCarA + FA<br />

In (8) TA is total assets, WCtA is the total investment (assets) in working capital, WCiA is working capital in inventories, WCcA is working capital in cash, WCarA is working capital in account receivables, and FA is fixed assets. The reclassification of balance sheet liabilities is conducted according to the origin of the sources of capital:<br />

(9) TS = E + D = E + WCtB + DFs + DFl<br />

In (9) TS is total liabilities and equity, E is equity, D is total debt, WCtB is total working capital liabilities, DFs is short-term financial debt (due within 12 months), and DFl is medium/long-term financial debt (duration over 12 months). The difference between WCtA and WCtB is net working capital (NWC). Expressed in this way, the formulas provide a metric useful to compare economic and financial results, also alongside traditional ratio analysis (ROE, ROA, EBITDA / I, EBIT / I).<br />
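As a concrete illustration, the value added reclassification in (6) and the cash flow waterfall in (7) can be sketched in a few lines of code. The figures below are hypothetical placeholders (thousand €), not sample data, and the variable names simply mirror the paper's notation.

```python
# Hypothetical income statement items (thousand EUR); notation follows the paper.
T_omega = 10_000   # turnover (T-omega)
dI_alpha = 800     # inventory variation (Delta-I-alpha): aging stock grows
Ce = 6_500         # external costs, eq. (3): Ce = R-beta + S-gamma
L_delta = 1_200    # labour cost (L-delta)
DA_eps = 700       # depreciation and amortization (non-monetary cost)
I = 300            # interest charge
V = 0              # revaluations / devaluations
E = 0              # extraordinary income or expenses
TX = 120           # income tax

# Value added reclassification, eq. (6)
VA = T_omega + dI_alpha - Ce     # value added
EBITDA = VA - L_delta
EBIT = EBITDA - DA_eps
profit = EBIT - I + V + E - TX   # Pi

# Cash flow waterfall, eq. (7)
dNWC = 1_500       # absorption of net working capital
dFA = 400          # investments in fixed assets
DS = 500           # debt service (principal K + interest I)
CF = profit + DA_eps + TX        # add back non-monetary costs and taxes
OCF = CF - dNWC
UFCF = OCF - dFA
FCFE = UFCF - DS

print(VA, EBITDA, EBIT, profit)  # 4300 3100 2400 1980
print(CF, OCF, UFCF, FCFE)       # 2800 1300 900 400
```

Even with a positive accrual profit, a heavy absorption of net working capital (as with long aging inventories) can push OCF, UFCF and FCFE toward zero or below, which is precisely the divergence between economic and financial results the paper investigates.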

3 The meat sector<br />


In Italy, the production of meat is one of the most important industries in the agro-food system, especially as regards the production of pork and bovine meat. The Italian production of pigs in 2009 was 12,922,000 animals; 8,707,362 pigs were destined to produce cold cuts with protected designation of origin (PDO), according to IPQ-INEQ data. The consumption of pork in Italy was 37.68 kg per capita in 2009 and the self-supply rate was 68.9%. Genetic selection follows two directions: for cold cuts production the Italian Large White (LWI), Italian Landrace (LI) and Italian Duroc (DI) breeds are raised, while for the production of butcher meat the Pietrain (P) breed is used. In 2008, 121 slaughterhouses operated in the Italian agro-food system, slaughtering more than 9 million pigs; the highest concentrations of slaughterhouses are in the Lombardia region (38), the Emilia Romagna region (27) and the Piemonte region (18). All slaughterhouses are inspected at least once a year to check the information about the slaughtering activity, in order to record the incoming streams of live pigs and the outflows of raw material. This procedure, associated with a selection, allows the exclusion of fresh pork legs not suitable for production, reducing the final index of non-compliance.<br />

The consumption of cured meats in Italy in 2009 was 1.1745 million tons, raw ham being the first cold cut by consumption in Italy (280.6 thousand tons), followed by cooked ham (275.8 thousand tons), mortadella (173.9 thousand tons) and salame (110.4 thousand tons). In Italy there are 33 cold cuts with PDO and PGI marks, especially concentrated in the northern regions. The regions with the largest numbers of protected products are Emilia Romagna (11 protected products: 7 PDO and 4 PGI), Lombardia (3 PDO and 6 PGI) and Veneto (2 PDO and 4 PGI). In 2009, 8.680 million pigs were slaughtered with PDO certification, making 17.361 million thighs available for PDO production; 14.550 million thighs entered PDO production, of which 9,429,462 for the PDO Prosciutto di Parma and 2,521,213 for the PDO Prosciutto San Daniele.<br />

In 2008, the herds for PDO production were 4,819 in 11 regions of central and northern Italy. The highest concentration of farms is in Lombardia (1,936 farms), followed by Piemonte (970) and Emilia Romagna (926), so that 79.5% of the herds are located in these three regions. As for the distribution of pigs per genotype, 2008 data show a prevalence of pigs from hybrid boars, representing 67.2% of the total.<br />

Even in the meat sector, the European Community has implemented a strategy of diversification of farm production in order to achieve a better balance between supply and demand in the markets, considering that the production, processing and distribution of agricultural products and foodstuffs play an important role in the Community. Reg. (EC) No 510/2006 of 20 March 2006 regulates the protection of geographical indications and designations of origin for agricultural products and foodstuffs. The verification of compliance with specifications (Article 11) shall be performed before the product is marketed, by one or more competent authorities and/or one or more control bodies within the meaning of art. 2 of Reg. (EC) No 882/2004; the control body operates as a product certification institution. Istituto Parma Qualità (IPQ), linked with Istituto Nord Est Qualità (INEQ), has implemented a system that verifies control and compliance requirements for the origin of raw materials and the production process<br />



upstream in the chain. In this system of rules and controls, the farms must put the firm code and the month-of-birth code on both legs, so that animals are slaughtered at no less than nine months of age; in this way it is possible, within thirty days of the birth of the pig, to exclude animals born outside the territory of origin. The transfer of animals between farms must be documented, in order to ease the controls carried out on it by IPQ and INEQ. The slaughterhouse must fill out a document for each day of production with a list of all lots of animals received and the number of pigs slaughtered, with their codes of origin and provenance. Moreover, the slaughterhouse puts on each thigh a stamp of approval attesting the compliance of the codes of origin, provenance and quality.<br />

Raw material arrives at the cold cuts production firm with the mark of identification and the self-certification of slaughter, together with a copy of the document issued by the slaughterhouse. For every delivery, the production firm checks the conformity of the raw material and the mark, and records it in an official register in which it is possible to find the description and all the identification elements of the product. IPQ and INEQ include these data in their database and check all the documents produced for each lot of production. It is the responsibility of IPQ and INEQ to check all the quality standards of the cured product by testing and verifying the minimum maturing period and the absence of morphological, technical and taste defects. After these procedures, the PDO certification mark is applied to the product and may be shown on the product label.<br />

4 Impact on local socio-economic system<br />

The activities related to agriculture, the processing industry and related services have a central role in the socio-economic system of the province of Parma. The food industry is the first industry in the province, with a turnover (2008) of € 7,500 million, 36.6% of total industrial sales in the province, of which € 973 million derive from exports. The turnover of the food plant machinery industry, the third largest industrial sector of the province, stands at € 2,200 million, equal to 10.7% of total industry turnover. The food sector and the food plant sector, taken together, account for 47.3% of the industry turnover of the province. The food sector with the highest turnover is that of pasta, bread, pastries, frozen foods and related products, also considering the presence of some large companies; the turnover of the sector was, in 2008, € 3,000 million (40.0% of provincial revenues in the food industry and 14.6% of industry turnover in general). The meat preserve industry generated € 900 million of turnover in 2008 (25.3% of sales in the food industry and 9.3% of industry sales in general).<br />

The province of Parma has an important meat processing activity, so the local socio-economic system is characterized by the presence of a large number of firms processing pork meat, especially in the production of cold cuts. Based on data made available by the Registry of firms at the Chamber of Commerce of Parma, 446 firms and 143 local units of firms operate in the meat processing industry in the province of Parma (10.13 main activity code of the ATECO 2007 classification), for a total of 589 units in the sector. The meat sector employs 4,399 staff employees and 322 independent operators, for a total of 4,721 labour units. The municipalities in the province with the greatest presence of the meat industry are Langhirano (123 companies, 41 local units, 1,140 staff employees and 72 independent operators), Lesignano de' Bagni (38 firms, 10 local units, 352 staff employees, 18 independent operators), Felino (35 firms, 17 local units, 634 staff employees, 44 independent operators) and Sala Baganza (26 firms, 17 local units, 609 staff employees, 19 independent operators), while 38 firms, 10 local units, 366 staff employees and 21 independent operators operate in the municipality of Parma.<br />
operate in the municipality of Parma.<br />

Parma PDO Ham is the most important product of the cold cuts industry in the province of Parma and is produced observing the production regulations issued by the Consorzio del Prosciutto di Parma, ensuring respect of EEC Regulation 2081/92 (now Regulation EC 510/06); Parma PDO Ham derives from processing the thighs of heavy pigs that must be older than 9 months of age and weigh over 150 kg. The pigs must be bred in the territory of 10 regions of northern and central Italy, but the production process must take place in a part of the province of Parma at a distance of at least 5 km south of the Via Emilia, bounded by the river Enza to the east and by the river Stirone to the west. The southern limit of the production area is the altitude of 900 meters above sea level. In addition to the Consorzio del Prosciutto di Parma, Istituto Parma Qualità (IPQ) conducts the necessary quality checks on the ham as a "third and independent party".<br />

Analyzing firms by rank size, based on hams produced per year, there is a concentration of production in a small number of firms; with a total analyzed production of 9,429,462 hams processed in 181 plants, and an average production of 52,096 hams per plant, 58.85% of production is concentrated in the 25.41% of manufacturing plants characterized by an annual production of more than 75,000 hams. Manufacturing plants with fewer than 25,000 hams per year produced 5.62% of the hams, involving 34.25% of plants; moreover, plants producing up to 1,000 pieces per year are 8.84% of the total (0.08% of production), while plants producing up to 10,000 pieces per year are 22.10% of the total, with only 1.53% of the number of Parma Hams in 2009.<br />
Parma Ham in 2009.<br />

Production plant per rank size | Ham (n.) | Ham (%) | Production plant (n.) | Production plant (%)<br />
0 – 1,000 | 7,652 | 0.08% | 16 | 8.84%<br />
1,001 – 10,000 | 136,870 | 1.45% | 24 | 13.26%<br />
10,001 – 25,000 | 384,970 | 4.08% | 22 | 12.15%<br />
25,001 – 50,000 | 2,067,900 | 21.93% | 53 | 29.28%<br />
50,001 – 75,000 | 1,282,433 | 13.60% | 20 | 11.05%<br />
75,001 – 100,000 | 2,161,614 | 22.92% | 25 | 13.81%<br />
100,001 – 200,000 | 2,316,990 | 24.57% | 17 | 9.39%<br />
> 200,000 | 1,071,033 | 11.36% | 4 | 2.21%<br />
Total | 9,429,462 | 100.00% | 181 | 100.00%<br />

Source: IPQ<br />

Table 1. Production plant per rank size (2009)<br />
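The concentration figures discussed in the text can be recomputed directly from the rank-size data in Table 1; a small sketch:

```python
# Rank-size data from Table 1 (IPQ, 2009): size class -> (hams, plants)
classes = {
    "0-1000": (7_652, 16),
    "1001-10000": (136_870, 24),
    "10001-25000": (384_970, 22),
    "25001-50000": (2_067_900, 53),
    "50001-75000": (1_282_433, 20),
    "75001-100000": (2_161_614, 25),
    "100001-200000": (2_316_990, 17),
    ">200000": (1_071_033, 4),
}
total_hams = sum(h for h, _ in classes.values())    # 9,429,462
total_plants = sum(p for _, p in classes.values())  # 181

# Shares of production and of plants in the classes above 75,000 hams/year
big = ["75001-100000", "100001-200000", ">200000"]
ham_share = sum(classes[c][0] for c in big) / total_hams
plant_share = sum(classes[c][1] for c in big) / total_plants
print(f"{ham_share:.2%} of hams from {plant_share:.2%} of plants")
# -> 58.85% of hams from 25.41% of plants
```

The same loop with the three smallest classes reproduces the 5.62% / 34.25% figure for plants below 25,000 hams per year.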

The consumption of Parma PDO Ham takes place for 79% on the domestic market and for 21% in foreign markets; 2,046,495 hams were exported in 2009 (12,662 tons), with an estimated turnover of € 181 million. France and the United States are the most important foreign markets; the European market as a whole accounts for 75.05% of exports, while the American continent accounts for 21.30% of exports, of which 18.81% in the USA alone; exports to other states are modest, except Japan, which accounts for 4.28% of exports, approximately 87 thousand hams in 2008. During the last decade there was an increase in the consumption of Parma ham sliced and packaged in trays for sale at the refrigerated counter. During the period 2005/2009 the increase in the number of hams sliced was equal to 83.2%, from 627,344 to 1,149,574, and the related production of packages increased from 30.885 million in 2005 to 54.796 million in 2009.<br />

The slicing process performed within the production chain makes the consumption process easier, particularly in foreign markets, where slicing performed at the store or directly by the consumer is not always carried out with the necessary expertise, thus penalizing the sale to the final consumer. With regard to the market, sliced Parma ham, with a total production of 6,010,930 kg of food (1,865,490 kg, equal to 31.03%, for domestic consumption and 4,145,440 kg, 68.97%, for export), confirms the presence of foreign demand for a product with a high level of service. Also with respect to exports of sliced Parma PDO Ham, there is a concentration of demand in some foreign markets: the top 5 export destination markets are Britain, France, Belgium, Germany and the USA, and these markets generate 79.41% of exports (49.12% for the first two target markets). Although the sliced product is concentrated in the European market, which consumes 86.55% of exports, the USA market is also relevant for the sliced product (8.83% of the export market for sliced ham).<br />

5 Data analysis<br />

In this section, the paper analyzes the annual accounts (year 2009) of a sample of 50 firms in the Parma Ham area, examining the balance sheet and the income statement on the basis of data made available by the local Chamber of Commerce.<br />

The analysis of the balance sheet shows an average TA of € 15.212 million, with a minimum of € 1.047 million and a maximum of € 67.968 million; average WCtA / TA is 63.91% and FA / TA is 36.09%; WCiA / TA is 37.17%. The other components of working capital, (WCcA + WCarA) / TA, amount to 21.25%. The analysis of the sources of capital in the sample shows, on average, a leverage of 2.495 and a DER of 1.495.<br />
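The reported leverage and DER can be reconstructed from the mean values in Table 2; a sketch of the ratio definitions used here (leverage = TA / E, DER = total debt / E, with total debt including both financial debts and working capital liabilities):

```python
# Mean values from Table 2 (EUR)
TA = 15_212_472            # total assets
E = 6_096_607              # equity
DF_s = 3_001_646           # short-term financial debt
DF_l = 2_490_703           # medium/long-term financial debt
WC_debt = 3_623_516        # working capital liabilities

D = DF_s + DF_l + WC_debt  # total debt, eq. (9): TS = E + D
assert TA == E + D         # the balance sheet identity holds exactly

leverage = TA / E
DER = D / E
print(round(leverage, 3), round(DER, 3))  # -> 2.495 1.495
```

Note that the sample means satisfy the identity TS = E + D to the euro, which confirms the internal consistency of Table 2.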



Index Mean Median St. Dev. CV Min Max<br />

Inventories 5,655,549 3,877,189 6,047,882 0.940 5,002 27,966,350<br />

Account receivable 3,543,580 2.19,.453 3,805,939 0.931 125,932 17,823,799<br />

Cash 325,810 30,002 632,808 0.515 22 2,806,379<br />

Others 198,049 8,426 548,379 0.361 - 3,336,096<br />

Working capital (inv.) 9,722,988 8,055,316 9,215,128 1.055 371,600 41,655,870<br />

Fixed asset 5,489,484 3,404,495 6,090,518 0.901 108,167 34,006,538<br />

Total Asset 15,212,472 11,587,352 13,892,680 1.095 1,047,065 67,968,144<br />

Equity 6,096,607 4,678,132 5,738,354 1.06 295,708 24,674,954<br />

Financial Debt (s) 3,001,646 1,646,256 4,067,973 0.738 - 20,520,274<br />

Financial Debt (l) 2,490,703 904,519 3,488,198 0.714 - 13,155,378<br />

Working Capital (debt) 3,623,516 2,367,118 3,411,318 1.062 321,802 18,096,661<br />

Total Source 15,212,472 11,587,352 13,892,680 1.095 1,047,065 67,968,144<br />

Source: firm data and elaborations<br />

Table 2 – Investment and source (Firm Sample 50 firms – annual account 2009)<br />

Index Mean Median St. Dev. CV Min Max ≤0 >0<br />

EBITDA 458,565 305,627 620,765 0.74 - 192,552 2,750,309 5 45<br />

EBIT 384,754 196,573 570,816 0.67 - 646,613 2,107,160 9 41<br />

NET PROFIT 92,015 31,140 323,492 0.28 - 725,250 1,184,406 15 35<br />

CF 31,969 232,527 514,245 0.06 - 206,715 2,539,986 6 44<br />

OCF - 476,764 86,711 1,013,058 - 0.47 -1,683,759 3,800,912 21 29<br />

UFCF - 593,887 - 88,701 835,774 - 0.71 -1,785,798 3,113,437 30 20<br />

FCFE - 611,777 - 191,278 763,872 - 0.80 -2,110,005 2,088,789 33 17<br />

Source: firm data and elaborations<br />

Table 3 – Economic and financial margins (Firm Sample 50 firms – annual account 2009)<br />

The income analysis of the 50-firm sample in 2009 shows that average EBITDA is € 0.459 million, with 5 cases of negative EBITDA; average EBIT is € 0.384 million, with 9 cases of negative EBIT out of 50 firms; average net profit is € 0.092 million, with 15 cases of negative net profit out of 50 firms, so that 35 firms generate profits (Π > 0) and 15 generate losses (Π ≤ 0). In order to quantify the sources of liquidity, analyzing the creation of cash flow, the data show a situation where average CF is € 0.032 million, with 6 cases of negative CF; average OCF is € -0.477 million, with 21 cases of negative OCF out of 50 firms; average UFCF is € -0.594 million, with 30 cases of negative UFCF out of 50 firms; and average FCFE is € -0.612 million, with 33 cases of negative FCFE out of 50 firms. The analysis shows a great absorption of capital in the working capital cycle, with difficulty in generating a positive cash flow to serve debt (UFCF) and to distribute dividends to shareholders (FCFE). In particular, working capital has a substantial effect on the absorption of liquidity (6 cases of negative CF against 21 cases of negative OCF). It is interesting to note that there is a fairly significant difference between the net profit and FCFE means: applying a t test (two samples paired for means), as shown in table 4, the significance level is 92.61% (1 − 0.0739) in a two-tailed analysis at 95% reliability. A certain difference in means also appears applying a t test (two samples paired for means), as shown in table 4, with a significance level of 88.75% (1 − 0.1125) in a two-tailed analysis. A clearer difference appears when analyzing equity ratios (ROE and FCFE / E); applying a t test (two samples paired for means) at 95% reliability, as shown in table 4, the significance level is 99.88% (1 − 0.0012) in a two-tailed analysis.<br />
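The test used throughout this section is a paired two-sample t test on means. A sketch of the mechanics with SciPy, using made-up placeholder figures (the sample's firm-level data are not reproduced here; the moments are only loosely scaled on Table 3):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder firm-level figures (EUR): 50 paired observations
net_profit = rng.normal(92_000, 320_000, size=50)
fcfe = net_profit - rng.normal(700_000, 300_000, size=50)  # paired with net_profit

t_stat, p_two_tail = stats.ttest_rel(net_profit, fcfe)  # paired test, df = 49
significance = 1 - p_two_tail  # the paper reports 1 - p as the "significance level"
print(f"t = {t_stat:.4f}, two-tail p = {p_two_tail:.4f}")
```

`scipy.stats.ttest_rel` returns the t statistic and the two-tailed p-value, matching the "two sample paired per means, 2 tails" layout of table 4.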



 | Analysis 1: NET PROFIT | Analysis 1: FCFE | Analysis 2: ROE | Analysis 2: FCFE / E<br />
Means | 216,3066 | 121,1119 | 0.845% | -7.240%<br />
Variance | 730,616 | 261,527 | 0.299% | 3.433%<br />
Data | 50 | 50 | 50 | 50<br />
Pearson | 0.9795 (Analysis 1) | 0.4689 (Analysis 2)<br />
Degrees of freedom | 49 | 49<br />
Stat t | 1.8264 (Analysis 1) | 3.4276 (Analysis 2)<br />
P(T ≤ t) two-tail | 0.0739 (Analysis 1) | 0.0012 (Analysis 2)<br />

Table 4 – t test, two samples paired for means (Firm Sample 50 firms – annual account 2009)<br />

Analyzing the debt service ratios (table 5), UFCF / DF is > 0 only in 20 cases out of 50; even more, (UFCF − I) / DF is > 0 in only 17 cases. It is to be noted that EBITDA / I has 45 cases of value > 0 and 38 cases > 1, but<br />

OCF / I has 29 cases of value > 0 and 25 cases > 1; applying the t test (two samples paired for means) to this couple of ratios in order to quantify the significance level of the difference in means, the result, as shown in table 4, is a significance level of 86.55% (1 − 0.1345) in a two-tailed analysis.<br />

Index Mean Median St. Dev. CV Min Max >0 >1<br />

ROE 0.8453% 1.0862% 5.4112% 0.16 -13.8454% 13.7024% 35 NU<br />

ROA 2.4759% 2.4617% 3.1800% 0.78 -4.2638% 11.9884% 41 NU<br />

EBITDA / I (interest) 133.136 2.653 621.967 0.214 - 21.738 4.232.818 45 38<br />

EBIT / I (interest) 38.502 1.919 156.143 0.247 - 40.264 893.108 41 38<br />

CF / I (interest) 105.316 1.883 521.214 0.202 - 23.032 3.617.948 44 32<br />

OCF / I (interest) 71.063 1.085 369.030 0.193 - 69.246 2.547.273 29 25<br />

UFCF / I (interest) -442.758 - 0.824 3.184.425 0.139 - 22.723.013 688.078 20 17<br />

(UFCF - I (interest)) / DF 7.780 - 0.048 49.726 0.156 - 1.163 326.150 17 2<br />

UFCF / DF 8.138 - 0.016 51.598 0.158 - 1.147 338.471 20 NU<br />

FCFE / E - 0.072 - 0.055 0.183 - 0.395 - 0.561 0.452 17 NU<br />

Source: firm data and elaborations ; NU not useful<br />

Table 5 – Economic and financial ratios (Firm Sample 50 firms – annual account 2009)<br />



 | Analysis 3: EBITDA | Analysis 3: OCF | Analysis 4: EBITDA / I | Analysis 4: OCF / I<br />
Means | 458,564.62 | 294,736.44 | 133.1356 | 71.0631<br />
Variance | 393,212,880,156 | 1,047,230,428,214 | 394,737 | 138,962<br />
Data | 50 | 50 | 50 | 50<br />
Pearson | 0.7221 (Analysis 3) | 0.9618 (Analysis 4)<br />
Df | 49 | 49<br />
Stat t | 1.6163 (Analysis 3) | 1.5219 (Analysis 4)<br />
P(T ≤ t) two-tail | 0.1125 (Analysis 3) | 0.1345 (Analysis 4)<br />


ASS.I.CA. (2010), Rapporto Annuale 2009. Milano.<br />

Binsbergen J., Graham J., Yang J. (2008), The cost of debt. Duke University, Working paper.<br />

Bonazzi G. (2005), Prosciutto di Parma DOP e sistema dei controlli. In: Annali Facoltà Medicina Veterinaria di<br />

Parma, Anno XXIV, 2004, Università degli Studi di Parma, Parma.<br />

Bonazzi G., Iotti M., Salatas V. (2010), Financial Analysis of the Parma PDO Ham Firms. In: Proceedings of the 7th<br />

International Conference on Applied Financial Economics, INEAG, Samos Island, Greece.<br />

Brealey R.A., Myers S.C., Sandri S. (2003), Principi di finanza aziendale. Milano, McGraw-Hill.<br />

Ceccacci G., Camanzi P., Rigato C. (2008), Basilea 2 per piccole e microimprese, Milano, Edizioni FAG.<br />

Cleary S. (1999), The Relationship between Firm Investment and Financial Status, Journal of Finance, 54, 673-692.<br />

Damodaran A. (1994). Damodaran on Valuation. Security Analysis for Investment and Corporate Finance. New<br />

York, John Wiley & Sons.<br />

Ferrero F., Dezzani F., Pisoni P., Puddu L. (2005), Le analisi di bilancio. Indici e flussi. Milano, Giuffrè.<br />

Henry D. (1996), Cash Flow and Performance Measurement: Managing for Value, Financial Executives Research<br />

Foundation. New York, Morristown.<br />

Iotti M. (2009), La valutazione degli investimenti industriali. Milano, Franco Angeli<br />

IPQ-INEQ, (2002), Certificare l’origine della qualità, IPQ-INEQ, Parma.<br />

Kurshev A., & Strebulaev I. A. (2006), Firm size and leverage. Stanford, Stanford University.<br />

Lagerkvist C.J., Andersson H. (1996), Taxes, inflation and financing – the rate of return to capital for the<br />

agricultural firm, European review of agricultural economics, 23, 437 – 454.<br />

Lewellen J.W., (2004), Predicting returns with financial ratios, Journal of Financial Economics 74, 209-235.<br />

Sciarelli S. (2004), Fondamenti di economia e gestione delle imprese. Padova: CEDAM.<br />

Shireves R.E., Wachowicz J.M. (2000), Free Cash Flow (FCF), Economic Value Added (EVATM), and Net Present<br />

Value (NPV): A Reconciliation of Variations of Discounted-Cash-Flow (DCF) Valuation. Knoxville:<br />

Tennessee University Press.<br />



EMPLOYEE STOCK OPTIONS INCENTIVE EFFECTS: A CPT-BASED MODEL<br />

Hamza BAHAJI, DRM Finance,Université de Paris Dauphine, France<br />

Email: hbahaji@yahoo.fr<br />

Abstract. This paper examines the incentives from stock options for loss-averse employees subject to probability weighting. Employing the certainty equivalence principle, I build on insights from Cumulative Prospect Theory (CPT) to derive a continuous-time model to value options from the perspective of a representative employee. Consistent with a growing body of empirical and experimental studies, the model predicts that the employee may value his options in excess of their risk-neutral value. This is in stark contrast with a common finding of standard models based on the Expected Utility Theory (EUT) framework, namely that the option value to a risk-averse, undiversified employee is strictly lower than the value to risk-neutral outside investors. In particular, I prove that loss aversion and probability weighting have countervailing effects on the option's subjective value. In addition, for typical settings of the preference parameters around the experimental estimates, and assuming the company is allowed to adjust existing compensation when making new stock option grants, the model predicts that incentives are maximized for strike prices set around the stock price at inception. This finding is consistent with companies' actual compensation practices, whose existence standard EUT-based models have difficulty accommodating.<br />

Keywords: Stock options, Cumulative Prospect Theory, Incentives, Subjective value.<br />

JEL Classification: J33, J44, G13, G32, M12<br />

1 Introduction<br />

Despite an increasing interest in restricted stock and performance unit plans, the 2006 Hewitt Associates Total Compensation Measurement survey revealed that stock options are still the most prevalent long-term incentive vehicle 1 . The stated argument for the large use of executive stock options is that they align the interests of executives and shareholders, since they provide incentives for the manager to act in order to increase the firm value. The use of stock options has even moved beyond the traditional arena of the executive population. Actually, firms' compensation practices show that stock options are issued to reward non-executive employees as well. In order to figure out why stock options may be attractive to employees, it is crucial to assess the utility employees receive from them. Moreover, understanding how an employee values his stock options (i.e. their subjective value) allows assessing their incentive power and the implied employee behaviour in terms of risk taking.<br />

Most of the theoretical literature on stock options relies on the Expected Utility Theory (EUT henceforth) framework to derive models of option value from the employee perspective (Lambert et al., 1991; Hall and Murphy, 2000, 2002; Henderson, 2005). These models predict that the nontransferability of the options and the hedging restrictions faced by the employee make him value his options below their issuance cost borne by the company (i.e. their risk-neutral value). Moreover, standard normative models fail to predict stock options as part of the compensation contract. Several quantitative studies set in a principal-agent framework showed that EUT-based models predict optimal compensation contracts which do not contain convex instruments like stock options (Holmstrom and Milgrom, 1987; Dittmann and Maug, 2007).<br />

This paper analyzes the valuation of stock options and their incentive effects for an employee exhibiting preferences as described by Cumulative Prospect Theory (Tversky and Kahneman, 1992). It aims to propose an alternative theoretical framework for the analysis of the pay-to-performance sensitivity of equity-based compensation that takes into account a number of prominent patterns of employee behavior that standard EUT cannot explain. This work is motivated by recent empirical and theoretical research on employee compensation incorporating CPT-based models (Dittmann et al., 2008; Spalt, 2008). These models have proved successful in explaining some observed compensation practices, and specifically the almost universal presence of stock options in executive compensation contracts, which EUT models have difficulty accommodating. They have therefore advanced the CPT framework as a promising candidate for the analysis of equity-based compensation contracts.

1 80% of the companies responding to the survey reported that stock option grants represented in 2006, on average, about 54% of their global long-term incentives.



I drew on this theoretical framework to derive a continuous-time model of the stock option subjective value using the certainty equivalence principle. I then performed sensitivity analyses with respect to preference-related parameters and found that loss aversion and probability weighting have countervailing effects. In particular, I proved that the option subjective value is increasing in the degree of probability weighting and decreasing in loss aversion. My analyses also show that, for a given level of option moneyness, the subjective value of the option may lie strictly above the Black and Scholes value (BS henceforth) when the effect of probability weighting tends to dominate that of loss aversion. These results lead to the conclusion that the lottery-like nature of stock options, combining large gains with small probabilities, may make them attractive to employees subject to probability weighting, which is consistent with the proposition that employee option value estimates may exceed the BS value (Lambert and Larcker, 2001; Hodge et al., 2006; Sawers et al., 2006; Hallock and Olson, 2006; Devers et al., 2007).

Furthermore, this work elaborates on incentives from stock options and on some of their implications in terms of design. Following previous research, I defined incentives as the first-order derivative of the subjective value with respect to the stock price. A numerical analysis of the incentive function shows that stock option incentive effects are increasing in the employee's degree of probability weighting and may even lie above incentives for a risk-neutral individual. Moreover, I considered the incentive effects of setting the strike price of the option above or below the stock price at inception. In this analysis, I relied on Hall and Murphy's (2002) methodology in solving for the exercise price that maximizes incentives while holding constant the company cost of granting the options. I used this approach to explore the situation where the company is allowed to adjust existing compensation when making new stock option grants. For typical settings of preference parameters around the experimental estimates from CPT (Tversky and Kahneman, 1992), the model predicts that incentives are maximized for strike prices set around the stock price at inception, which is consistent with companies' actual compensation practices. Additional analyses also suggest that loss-averse employees who are not subject to probability weighting, or who exhibit only very weak degrees of probability weighting, when receiving options at a high exercise price would willingly accept a cut in compensation to receive instead deep discount options, or restricted shares for those of them displaying more loss aversion. This result is broadly consistent with the findings of Hall and Murphy (2002) and Henderson (2005) for non-diversified risk-averse employees.

This article proceeds as follows. The first section describes the features of stock option value from the perspective of a representative employee with preferences as described by CPT. Throughout this paper, we will refer to this employee as a "CPT employee". This section also provides numerical analyses of the model's sensitivity to preference-related parameters. The next section introduces the incentive effects of stock options for a CPT employee and examines some design implications in terms of strike price setting. The risk-taking incentives question is explored in the third and last section. Appendices provide proofs of the propositions in the first section.

2 Stock option value from a CPT-employee perspective

In this section, I develop a base-case model for analyzing the value of the stock option contract from the perspective of a representative employee with CPT-based preferences (its subjective value henceforth). Specifically, I assume that the employee is granted a European call option on the company's stock, denoted by S, with maturity date t=T and strike price K. These are the traditional features of executive stock options as reported in Johnson and Tian (2000) and used by prior studies focused on stock option incentives (Lambert et al., 1991; Hall and Murphy, 2002; Henderson, 2005). In practice, stock options are often Bermudan-style options; my model nevertheless relies on a naïve setting in that it ignores complications related to early exercise or forfeiture.

2.1 Theoretical framework

2.1.1 Stock-option contract

The stock option contract is issued at t=0. The contract payoff at expiry, t=T, is \( h_T = (S_T - K)^+ \). I make the assumption that the employee is not allowed to short-sell the company stock and that he can earn the risk-free rate r from investing in a riskless asset. Moreover, the price dynamics of the stock are given by a geometric Brownian motion represented by the following SDE:

\[ dS_t = (r - q)\, S_t\, dt + \sigma\, S_t\, dZ_t \tag{1} \]



\( Z_t \) is a standard Brownian motion with respect to the probability measure IP. \( \sigma^2 \) and q are, respectively, the variance of the stock price returns and the dividend yield.
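Under these dynamics the terminal price \( S_T \) is lognormal. As an illustration of equation (1), the following minimal Monte Carlo sketch (my own Python code, with parameter values borrowed from the paper's later numerical setting in §2.2.3) checks that the discounted mean payoff reproduces the risk-neutral option value:

```python
import math
import random

# Illustrative sketch of the dynamics in eq. (1); parameter values are the
# ones used in the paper's numerical analysis (Sec. 2.2.3).
S0, K, T = 100.0, 100.0, 4.0
r, q, sigma = 0.03, 0.0, 0.30

random.seed(0)

def terminal_price():
    """Draw S_T implied by dS_t = (r - q) S_t dt + sigma S_t dZ_t."""
    z = random.gauss(0.0, 1.0)
    return S0 * math.exp((r - q - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)

# The discounted mean of the payoff h_T = (S_T - K)^+ under IP approximates
# the risk-neutral (Black-Scholes) value, about 28.3 for these parameters.
n = 200_000
mean_pay = sum(max(terminal_price() - K, 0.0) for _ in range(n)) / n
print(round(math.exp(-r * T) * mean_pay, 2))
```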

2.1.2 Risk preferences

Following Tversky and Kahneman (1992), I consider that, to each gamble with a continuous random outcome \( y \in \mathbb{R} \), whose probability density function is denoted by \( f(y) \), the employee assigns the value:

\[ E_{\pi}[y] = \int_{\mathbb{R}} v(y)\, d\pi\big[F(y)\big] \tag{2} \]

Note that the expectation \( E_{\pi}[\cdot] \) is a function of two distinct functions. The first function, \( v(\cdot) \), called the value function, is assumed to be of the form:

\[ v(y) = \begin{cases} (y - \theta)^{\alpha} & ;\; y \geq \theta \\ -\lambda\,(\theta - y)^{\alpha} & ;\; y < \theta \end{cases} \qquad \text{where } 0 < \alpha \leq 1 \text{ and } \lambda > 1 \tag{3} \]

This formulation has some important features that distinguish it from the standard utility specification. First, utility is defined over gains and losses assessed relative to the reference point, denoted by \( \theta \). The second important feature is the shape of the value function: while it is convex over losses, it is concave over gains, which represents the observation from psychology that people are risk-averse over gains and become risk-seeking over losses. Moreover, the value function has a kink at the origin introduced by the parameter \( \lambda > 1 \). This feature, known as loss aversion, gives a higher sensitivity to losses compared to gains. Finally, outcomes are treated separately from other components of wealth, which reflects the well-documented phenomenon of narrow framing (Thaler, 1999).

The second function, \( \pi_{a,b}(\cdot) \), is called the weighting function. It applies to cumulative probabilities, represented by the cumulative probability function \( F(\cdot) \), in order to transform them into decision weights according to:

\[ \pi_{a,b}\big[F(y)\big] = \begin{cases} \dfrac{\big(1 - F(y)\big)^{a}}{\Big[\big(1 - F(y)\big)^{a} + F(y)^{a}\Big]^{1/a}} & ;\; y \geq \theta \\[2ex] \dfrac{F(y)^{b}}{\Big[F(y)^{b} + \big(1 - F(y)\big)^{b}\Big]^{1/b}} & ;\; y < \theta \end{cases} \qquad \text{where } 0.28 < a \leq 1 \text{ and } 0.28 < b \leq 1 \tag{4} \]

This function stands for another piece of CPT, namely the nonlinear transformation of probabilities. Specifically, it captures experimental evidence that people overweight small probabilities and are more sensitive to probability spreads at higher probability levels. The degree of probability weighting is controlled separately over gains and losses by the weighting parameters a and b respectively. The closer these parameters are to the lower boundary at 0.28, the more the tails of the probability distribution are overweighted. For instance, when a=b=1, the probability weighting assumption is relaxed. For simplicity, these parameters are assumed to be equal (i.e. a=b) in the rest of this paper, and the weighting function will be denoted \( \pi_{a}(\cdot) \). Finally, note that the lower boundary at 0.28 is a technical condition ensuring that the first-order derivative \( \partial \pi_{a}(p) / \partial p \) is positive over ]0,1[, i.e. that \( \pi_{a} \) is strictly increasing in p.
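The two building blocks above are straightforward to code. The sketch below (my own illustrative Python, not from the paper) implements the value function of equation (3) and the single-tail Tversky-Kahneman weighting function used in equation (4), then checks numerically both the overweighting of small probabilities and the role of the 0.28 lower bound; the reference point is set to 0 purely for demonstration.

```python
import math

# Illustrative parameters: CPT experimental estimates cited in the paper;
# theta = 0 is an assumption made only for this demonstration.
alpha, lam, theta = 0.88, 2.25, 0.0

def v(y):
    """Value function (3): concave over gains, convex over losses, kink at theta."""
    return (y - theta)**alpha if y >= theta else -lam * (theta - y)**alpha

def w(p, a):
    """Tversky-Kahneman weighting function for a single tail, as used in (4)."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return p**a / (p**a + (1.0 - p)**a)**(1.0 / a)

def is_increasing(a, n=999):
    """Check monotonicity of w(., a) on a grid over ]0,1[."""
    pts = [w((i + 1) / (n + 1), a) for i in range(n)]
    return all(x < y for x, y in zip(pts, pts[1:]))

print(v(10.0), v(-10.0))        # a loss outweighs a same-sized gain
print(w(0.01, 0.65))            # > 0.01: small probabilities are overweighted
print(is_increasing(0.65), is_increasing(0.2))  # the 0.28 lower bound at work
```

For a = 0.65 (within the experimental range) the function is strictly increasing, while for a well below the 0.28 bound it turns locally decreasing, which is why the bound is imposed.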

2.2 Stock option subjective value<br />

2.2.1 The model<br />

In order to estimate the subjective value of the stock option contract described above, I use the certainty equivalence principle. In particular, this value is defined as the cash amount, \( C_{\lambda,a} \), that leaves the employee indifferent between this amount and the uncertain payoff of the contract, \( h_T \), irrespective of the composition of the remainder of his private wealth. Formally, \( C_{\lambda,a} \) is the solution of the following equation:

\[ v\big(C_{\lambda,a}\, e^{rT}\big) = E^{\mathrm{IP}_a}\big[v(h_T)\big] = \int_{\mathbb{R}} v(h_T)\, d\pi_{a}\big[F(S_T)\big] \tag{5} \]

The left-hand side of the equation above represents the benefit to the employee of receiving the cash amount \( C_{\lambda,a} \) instead of the stock option contract at the inception of the latter. This amount is assumed to be placed in the risk-free asset over the whole lifetime T of the stock option contract. The other side of the equation gives the expected utility to the employee of receiving the risky payoff, as implied by the value function \( v(\cdot) \). Here, the expectation relies on the transformed probability measure \( \mathrm{IP}_a \). Let \( E_{\lambda,a} \) denote the expectation on the right-hand side of (5). It follows from (5) and (3) that:

\[ C_{\lambda,a} = \begin{cases} e^{-rT}\Big(\theta + \big(E_{\lambda,a}\big)^{1/\alpha}\Big) & ,\; \text{if } E_{\lambda,a} \geq 0 \\[1ex] e^{-rT}\Big(\theta - \big(-E_{\lambda,a}/\lambda\big)^{1/\alpha}\Big) & ,\; \text{otherwise} \end{cases} \tag{6} \]

where \( E_{\lambda,a} \) writes:

\[ E_{\lambda,a} = I^{1}_{\lambda,a} + I^{2}_{\lambda,a} + I^{3}_{\lambda,a} \tag{6.1} \]

with:

\[ I^{1}_{\lambda,a} = \int_{l_1}^{+\infty} \big(g(x) - K - \theta\big)^{\alpha}\; \pi_{a}'\Big(\textstyle\int_{x}^{+\infty} \varphi(u)\, du\Big)\, \varphi(x)\, dx \tag{6.2} \]

\[ I^{2}_{\lambda,a} = -\lambda \int_{l_2}^{l_1} \big(K + \theta - g(x)\big)^{\alpha}\; \pi_{a}'\Big(\textstyle\int_{-\infty}^{x} \varphi(u)\, du\Big)\, \varphi(x)\, dx \tag{6.3} \]

\[ I^{3}_{\lambda,a} = -\lambda\, \theta^{\alpha} \int_{-\infty}^{l_2} \pi_{a}'\Big(\textstyle\int_{-\infty}^{x} \varphi(u)\, du\Big)\, \varphi(x)\, dx \tag{6.4} \]

\[ g(x) = S\, e^{\left(r - q - \frac{\sigma^2}{2}\right)T + \sigma \sqrt{T}\, x} \tag{6.5} \]

\[ l_1 = \frac{\ln\!\big((K+\theta)/S\big) - \left(r - q - \frac{\sigma^2}{2}\right)T}{\sigma \sqrt{T}} \tag{6.6} \qquad l_2 = \frac{\ln\!\big(K/S\big) - \left(r - q - \frac{\sigma^2}{2}\right)T}{\sigma \sqrt{T}} \tag{6.7} \]

Here \( \varphi(\cdot) \) is the Gaussian density function and \( \pi_{a}'(p) = \partial \pi_{a}(p) / \partial p \) is the first-order partial derivative of \( \pi_{a}(\cdot) \).

2.2.2 Reference point set up

Although CPT specifies the shape of the value function around the reference point, it does not provide guidance on how people set their reference points. Neither does most of the psychological literature, which commonly relies on the assumption that the reference point is the status quo. This literature nevertheless admits both the existence and the importance of non-status-quo reference points, since "there are situations in which gains and losses are coded relative to an expectation or aspiration level that differs from the status quo" (Kahneman and Tversky, 1979).

In principle, the employee would set the reference point in a way that fits his own expectations regarding the underlying share price at expiry. Intuitively, the employee could estimate the intrinsic value of the option based on his future share price forecasts, or he can rely on the BS value of the option disclosed by the firm. Following Spalt (2008), I consider that the reference point parameter in the model, \( \theta \), is the BS value 2 . Beyond the argument of empirical evidence on employee exercise behaviour depending on non-status-quo reference points (Huddart and Lang, 1996; Heath et al., 1999), this assumption is supported by firms' common practices in terms of stock option

2 To be more precise, the value used here is the expectation of the option payoff at expiry yielded by the BS model (i.e. the non-discounted BS value). Consistent with this specification, the probability measure IP used to derive the subjective value in (6) is the risk-neutral probability measure. Moreover, ignoring the probability weighting feature (i.e. a=1), this setting allows the subjective value implied by the model to converge towards the risk-neutral value (i.e. the BS value) when the preferences of the employee tend to risk neutrality (i.e. α=λ=1).



compensation. Most stock-option designers use the BS model in order to estimate the value of stock options as constituents of the total compensation package. This value is usually announced to the employee at the inception of the options. Moreover, the BS model is recommended in the FASB and IASC guidelines for determining the fair value of stock options (i.e. the amount an outside investor, with no hedging restrictions, would pay for the option) that needs to be disclosed in the financial statements. These statements, comprising the BS value of the stock options, are provided to shareholders as well as to stakeholders, including employees.

2.2.3 The impacts of preference-related parameters: a numerical analysis

To provide a concrete outline of the profile of the subjective value yielded by the model relative to the risk-neutral value profile, I performed a numerical analysis 3 of the value of a 4-year call option (T=4) with a strike price K=100. For the remaining option-related parameters, the figures were computed assuming no dividend payments (q=0%), σ=30% and r=3%. Moreover, I set the curvature parameter of the value function (α) and the loss aversion coefficient (λ) to 0.88 and 2.25 respectively, based on experimental estimates from CPT (Tversky and Kahneman, 1992). Furthermore, in order to calibrate the probability weighting function, I used three different values of the parameter a within the range of values estimated in the experimental literature 4 .
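A computation of this kind can be reproduced in a few lines. The following Python sketch is my own implementation of equations (5)-(6) under the setting above; the function names, the midpoint quadrature, and the finite-difference derivative of the weighting function are convenience choices of mine, not the paper's, and the three weighting parameters are illustrative values spanning the experimental range.

```python
import math

# Paper's numerical setting (Sec. 2.2.3); implementation details are mine.
S, K, T, r, q, sigma = 100.0, 100.0, 4.0, 0.03, 0.0, 0.30
alpha, lam = 0.88, 2.25                      # CPT curvature and loss aversion

def Phi(x):   # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def w(p, a):  # Tversky-Kahneman weighting function
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return p**a / (p**a + (1.0 - p)**a)**(1.0 / a)

def dw(p, a, h=1e-6):  # numerical first derivative of w
    top, bot = min(p + h, 1.0), max(p - h, 0.0)
    return (w(top, a) - w(bot, a)) / (top - bot)

def g(x):     # terminal stock price for a standard normal draw x
    return S * math.exp((r - q - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * x)

# Reference point: non-discounted BS expectation of the payoff (footnote 2)
d1 = (math.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
theta = S * math.exp((r - q) * T) * Phi(d1) - K * Phi(d2)

def subjective_value(a, n=4000, lo=-8.0, hi=8.0):
    """Certainty-equivalent value C_{lambda,a}, by midpoint quadrature."""
    dx, E = (hi - lo) / n, 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        pay = max(g(x) - K, 0.0) - theta       # payoff relative to theta
        if pay >= 0.0:                         # gains: weight decumulative prob.
            E += pay**alpha * dw(1.0 - Phi(x), a) * phi(x) * dx
        else:                                  # losses: weight cumulative prob.
            E -= lam * (-pay)**alpha * dw(Phi(x), a) * phi(x) * dx
    if E >= 0.0:
        return math.exp(-r * T) * (theta + E**(1.0 / alpha))
    return math.exp(-r * T) * (theta - (-E / lam)**(1.0 / alpha))

bs = math.exp(-r * T) * theta                  # discounted BS value
for a in (0.475, 0.65, 1.0):
    print(a, round(subjective_value(a), 2), "vs BS", round(bs, 2))
```

Consistent with Proposition 1 below, the computed value rises as a falls, and with a=1 (loss aversion only) it stays below the BS value.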

Figure 1 depicts the option value as a function of the stock price. The three blue curves represent the value profiles from the perspective of three CPT employees with the same value function and different degrees of probability weighting. At first sight, depending on the degree of probability weighting and the option moneyness (S/K), the subjective value can lie either above or below the BS value. In contrast, standard EUT-based models predict that the option value from a risk-averse employee's perspective is systematically lower than the risk-neutral value (Hall and Murphy, 2002; Henderson, 2005). These preliminary results are consistent, though, with some empirical findings suggesting that employees are frequently inclined to overestimate the value of their stock options compared with the BS value (Lambert and Larcker, 2001; Hodge et al., 2006; Sawers et al., 2006; Hallock and Olson, 2006; Devers et al., 2007). On the other hand, the results presented in figure 1 show that the option subjective value is increasing in the degree of probability weighting (i.e. decreasing in the parameter a). Actually, given the asymmetric profile of the option payoff, the expectation \( E_{\lambda,a} \) in the subjective value formula (6) is positively affected by the emphasis put on the tail of the payoff distribution, which is governed by the parameter a: the lower a, the more small probabilities are overweighted and the more medium to large probabilities are underweighted. This lottery-like nature of an option, combining large gains with small probabilities, may make it attractive to a CPT employee subject to probability weighting. This preliminary outcome leads to proposition 1, which states that:

Proposition 1: the value of the stock option contract to a CPT employee is increasing with respect to his degree of probability weighting (i.e. decreasing with respect to a) 5 .

Moreover, the effect of probability weighting is expected to increase with the skewness of the distribution of the underlying stock price, which is captured by the volatility parameter σ given the log-normality assumed in the model. To show this, I performed a numerical analysis of the sensitivity of the subjective value to the degree of probability weighting as a function of the volatility σ and the parameter a. This sensitivity is defined as the partial derivative of the subjective value with respect to a. The results are reported in figure 2 in the form of a graph. It shows that the sensitivity to the parameter a is negative and locally decreasing in volatility. This means that the more volatile the share price, the more attractive the option will be to a CPT employee subject to probability weighting. This supports Spalt's (2008) finding that the effect of probability weighting provides an economic rationale for riskier firms (i.e. more volatile firms) to grant more stock options to non-executive employees.

Furthermore, I investigate the effect of loss aversion on the subjective value. The variable of interest here is λ. By analogy with the EUT framework, the option value from the perspective of a loss-averse employee is expected to decrease with his degree of loss aversion. To verify this, I computed numerically the first-order derivative with respect to λ across various levels of λ and moneyness, ranging from 0.05 to 1 and from 5% to

3 The integrals in (6.2), (6.3) and (6.4) were computed numerically.
4 Tversky and Kahneman (1992) obtained 0.65 on average (a=0.61 for gains and b=0.69 for losses). These results are corroborated by Abdellaoui's (2000) findings (a=0.60 for gains and b=0.70 for losses, hence an average of 0.65). In addition, Camerer and Ho (1994) obtained a=0.56 for gains, whereas Gonzalez and Wu (1996) and Bleichrodt and Pinto (2000) found a=0.71 and a=0.67 respectively.
5 Proofs are available in appendices A and B of Bahaji, H. (2010), Incentives from stock option grants: a behavioural approach, Working Paper, DRM Finance, Paris Dauphine University. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1734831.



200% respectively. The outcome is reported in figure 3. It shows that the sensitivity to loss aversion is negative and locally decreasing in moneyness. This means that the more loss-averse the employee, the less the option is worth to him. This conclusion is taken up in proposition 2 hereafter:

Proposition 2: the value of the stock option contract to a CPT employee is decreasing with respect to his degree of loss aversion (i.e. it is a decreasing function of λ). (For proofs see footnote 5.)

I performed a similar analysis in order to get a view of the effect of the curvature parameter α. In the same way, I assessed locally the first-order derivative with respect to α within a range of values from 0.05 to 1, using share prices ranging from 10 to 200. While not formally reported in this paper, the results show that this derivative is locally increasing with the option moneyness for values of α above, say, 0.75. They also show, broadly, that the subjective value is a monotone increasing function of α over a range of values around the experimental estimate of 0.88 (from 0.7 to 1), irrespective of the option moneyness.

Figure-1: Option value against the stock price. This figure is a plot of the subjective value computed under different probability weighting parameters (the blue curves). It illustrates the profile of the subjective value compared to that of the risk-neutral value (red curve). The parameters used are: T=4; K=100; σ=30%; r=3%; q=0%; λ=2.25; α=0.88.

Figure-2: Sensitivity to probability weighting. This figure is a plot of the partial derivative of the subjective value with respect to "a" against both "a" and the stock price volatility "σ". It exhibits the local effect of probability weighting given the payoff distribution skewness captured by "σ". The derivative was computed numerically based on the following parameters: T=4; K=S=100; r=3%; q=0%; λ=2.25; α=0.88.

Figure-3: Sensitivity to loss aversion. This figure is a plot of the partial derivative of the subjective value with respect to "λ" against both "λ" and the stock price "S". It exhibits the local effect of loss aversion given the option moneyness. The derivative was computed numerically based on the following parameters: T=4; σ=30%; K=100; r=3%; q=0%; a=0.65; α=0.88.

3 Incentives from stock-options

Stock options are incentive tools used within a principal-agent relationship to align the interests of the agent (the employee) with those of the principal (the shareholders). The shareholders grant stock options in order to provide the employee with incentives to make efforts that enhance the value of the firm, and thus their own wealth. Indeed, assuming that employees are aware of how their actions affect the share price, option holdings will prompt them to make efforts that increase the share price. Therefore, the incentive from a single option grant will depend on the degree of sensitivity of the subjective value to the stock price.

3.1 The incentive measure

Following Jensen and Murphy (1990), Hall and Murphy (2000, 2002) and others, I defined the incentive effect as the first-order derivative of the subjective value with respect to the share price, which captures how the value from the employee's perspective changes with an incremental change in the stock price. A preliminary numerical analysis relying on the setting reported in §2.2.3 shows that incentives are greatest for in-the-money 6 options and increasing

6 The terminology "at-the-money" refers to stock options with an exercise price equal to the stock price at inception. The expressions "out-of-the-money" and "in-the-money" are also used throughout the paper to refer to options with strike prices respectively above and below the grant-date stock price.



with the degree of probability weighting. Another result from the analysis is that, for sufficiently high levels of probability weighting, the option can give much more incentive to increase the stock price than is reflected by the BS delta. This is consistent with my previous finding that the subjective value can exceed the BS value. In addition, consistent with the EUT-based models, the analysis suggests that when the employee processes probabilities in a linear way (a=1) - which means that the probability weighting assumption is relaxed and only loss aversion matters - the incentives lie strictly below the BS delta whatever the level of the option moneyness. That is to say, the options are less attractive both for a risk-averse employee and for a loss-averse employee who is not subject to probability weighting. Furthermore, with a held constant at 0.65, I find that incentives are decreasing in loss aversion. Conversely, with the loss aversion parameter λ set to 1, which means that the loss aversion effect is neutralized, incentives for a representative loss-neutral employee with a degree of probability weighting equal to the experimental estimate of 0.65 overstate the BS delta.

3.2 Implications for stock option design: optimal strike price

Setting the strike price of standard stock options boils down to defining the threshold against which performance is assessed and, consequently, to determining the likelihood of a final payout. As stated in the previous section, incentives increase in the option moneyness and, equivalently, decrease with the strike price. In parallel, from the shareholders' perspective, granting in-the-money options is much more costly than granting out-of-the-money or at-the-money options (recall figure 1). This leaves the firm with a trade-off to make when setting the exercise price of the options, in the sense that, holding its cost unchanged, it could either grant fewer options at a low strike price or increase the grant size at a higher exercise price.

I relied on the methodology of Hall and Murphy (2002) in figuring out the optimal exercise price satisfying the double purpose of maximizing incentives and holding constant the firm's cost of granting options. I considered the situation where the employee and the firm are allowed to bargain efficiently over the terms of the compensation. Thus, the firm is assumed to fund additional options by an adjustment to other compensation components that leaves the employee indifferent between his initial package and the new package including the additional grant.

Let us consider, then, that the company is allowed to make an efficient adjustment to existing compensation components (cash, for example) in order to grant additional options to the employee. The impact of this adjustment should be neutral with regard to the total compensation cost for the company. Moreover, assuming this adjustment involves cash compensation, it must be attractive to the employee, so that he would be willing to give up some cash compensation in exchange for the extra option grant. Therefore, it must leave the employee at his initial total subjective value of the compensation package. It follows that the strike price that maximizes total incentives for a given company cost is the solution of the following optimization problem:

\[ \max_{K} \; \frac{\partial\, n\, C_{\lambda,a}(K)}{\partial S} \quad \text{subject to} \quad n\big[\phi(K) - C_{\lambda,a}(K)\big] = c \;\; \text{and} \;\; n \geq 0 \tag{7} \]

where \( \partial\, n\, C_{\lambda,a}(K) / \partial S \) denotes the incentives from receiving n options with a strike price K, \( \phi(K) \) is the per-unit cost (i.e. the BS value) of one option, and c is a fixed constant. The constraint in (7) is the aggregation of the company cost constraint and the employee value constraint used in Henderson (2005). This optimization problem was solved numerically by varying the parameter K. First, the BS and the subjective values are computed for a given K, which enables the grant size n to be determined in accordance with the constraint in (7) for K≠S. Then, the objective function is assessed based on n and K. This procedure is reiterated recursively until the optimal value of K is found. In this analysis, c was chosen such that, for the retained parameters, the number of granted at-the-money options n is around 1000, hence a total cost of €28 333.
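The cost benchmark is easy to verify: at the paper's parameters (T=4, σ=30%, r=3%, q=0, at the money), the BS value of one option is about €28.33, so a grant of n=1000 such options costs roughly €28,333. A minimal check, in my own illustrative Python:

```python
import math

# BS value of one at-the-money option under the paper's parameters;
# 1000 options then cost about EUR 28,333.
S = K = 100.0
T, r, q, sigma = 4.0, 0.03, 0.0, 0.30

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d1 = (math.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs = S * math.exp(-q * T) * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

print(round(bs, 2), round(1000 * bs))   # -> 28.33 28333
```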

The left-hand sub-figure in figure-4 exhibits total incentives for different levels of K and of the probability weighting parameter a, with loss aversion held constant at λ=2.25. For each combination of K and a, the constraint in (7) is solved for n, which allows total incentives to be determined. The plots indicate that when the employee is deeply subject to probability weighting (a ≤ 0.475), total incentives are strictly decreasing throughout the depicted range of strike prices (see the curves in blue). In this case, similar to the findings of EUT-based studies (Hall and Murphy, 2002; Henderson, 2005), incentives are maximized through restricted stock grants rather than stock options. The intuitive reason behind this is that, given that the employee systematically values the options in excess of their BS value,



an efficient trade-off over compensation allocation is made via the grant of equity-based instruments that the employee values at their actual cost, restricted stocks for instance. However, for lower degrees of probability weighting (typically 0.475 < a


the effect of loss aversion on K* by holding a constant at 0.65 and varying λ. It mainly shows that loss aversion has an effect opposite to that of probability weighting: when the probability weighting effect is dominant 8 (i.e. the employee may potentially put an overstated value on the option, specifically for high strikes), the optimal strike increases and total incentives decrease with loss aversion. Conversely, when the loss aversion effect is dominant, as stated before, the model yields predictions comparable to those of the EUT-based models.

4 Conclusion<br />

This paper proposes an alternative theoretical model of stock option subjective value to analyze options' incentive effects for employees. The model's predictions confirm the ability of CPT to explain some prominent incentive patterns that EUT models have difficulty capturing. In particular, it provides arguments on the well-documented tendency of employees to frequently value - under some circumstances - their options in excess of their cost to the company. Specifically, these results highlight the economic rationale for firms, in particular those with higher risk, to widely use stock options in non-executive employee compensation (Spalt, 2008).

Loss aversion and probability weighting are the key features driving the subjective value in the CPT model. These parameters have countervailing effects on the modeled subjective value. Depending on which of them is dominant in the preferences calibration, the model yields different predictions regarding incentives. Specifically, consistent with behavioral patterns observed in many surveys and experimental studies on equity-based compensation, the model predicts that, when the probability weighting feature prevails, the subjective value may overstate the risk-neutral value of the option. In this case, assuming the company and the employee bargain efficiently over the compensation components, incentives are maximized for strike prices set around the stock price at inception for a representative employee whose preferences calibration meets the experimental estimates from CPT (Tversky and Kahneman, 1992). This finding is consistent with companies' actual compensation practices. Moreover, executives with such a preferences profile may be prompted to act in order to increase share price volatility. However, when the emphasis is put on the loss-aversion feature, by relaxing the probability weighting assumption, the model yields results comparable to those of EUT-based models. In particular, the model predicts that loss-averse employees who are not subject to probability weighting, or who display only very low degrees of it, and who receive options at a high exercise price would willingly accept a cut in compensation to receive instead deep-discount options or, for those displaying more loss-aversion, restricted shares.
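The interplay of these two features can be illustrated with the standard CPT value function; the sketch below uses the Tversky and Kahneman (1992) piecewise power form with their median parameter estimates (a = 0.88, λ = 2.25), which are illustrative values rather than this paper's exact calibration:

```python
def cpt_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman (1992) piecewise power value function.

    Gains are valued as x**alpha; losses are amplified by the
    loss-aversion coefficient lam (median estimate 2.25).
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A loss of 100 looms larger than a gain of 100:
gain = cpt_value(100.0)
loss = cpt_value(-100.0)
print(gain, loss, abs(loss) / gain)  # the ratio equals lam = 2.25
```

With λ = 1 the loss branch collapses onto the mirror image of the gain branch, which is the limit case in which loss aversion is switched off.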

Despite their practical interest, conclusions should not be drawn from the results of this research without underlining some of its limitations. The first concerns the specification of the CPT model. Although the reference point specification in the model is consistent with both empirical evidence on people setting reference points in a dynamic fashion and firms' widespread use of the BS value as a standard for financial and human resources disclosures, the empirical and experimental literature is still silent on how people set reference points when assessing complex gambles like stock option payoffs. In addition, to keep the model tractable, only European-style options were studied in this paper. The model is, however, easily extendable to Bermudan-style options using numerical schemes such as lattice approaches. The other limitation of this study is related to the large heterogeneity in probability weighting that may exist across individuals (Wu and Gonzalez, 1996). The Tversky and Kahneman (1992) weighting function used in the model provides only a fit to the median profile.
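For reference, the weighting function in question has the closed form w(p) = p^a / (p^a + (1−p)^a)^(1/a); a minimal sketch follows (the value a = 0.65 used below is an illustrative choice close to published median estimates, not necessarily the paper's own parameter):

```python
def tk_weight(p, a=0.65):
    """Tversky-Kahneman (1992) probability weighting function.

    For a < 1 it overweights small probabilities and underweights
    large ones, which is what can push the subjective option value
    above the risk-neutral value for high-strike (low-probability) payoffs.
    """
    if p in (0.0, 1.0):
        return p
    num = p ** a
    return num / (num + (1.0 - p) ** a) ** (1.0 / a)

print(tk_weight(0.05))  # > 0.05: small probabilities are overweighted
print(tk_weight(0.95))  # < 0.95: large probabilities are underweighted
```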

Finally, this work highlights - as did some previous eminent research in this field (Dittmann et al., 2008; Spalt, 2008) - a number of promising future research directions in equity-based compensation incorporating the CPT framework. For instance, exploring the ability of CPT to explain the growing use of performance share plans instead of stock options in employee compensation would be of great interest for future research. Furthermore, given that several empirical studies have documented that employee stock option exercise behaviour is also driven by behavioural factors, a promising research direction incorporating CPT is studying its ability to predict exercise patterns.

8 See the limit case of λ=1 where loss-aversion is ignored (the blue dashed curve).



5 References

Abdellaoui, M. (2000), "Parameter-Free Elicitation of Utility and Probability Weighting Functions", Management Science, Vol. 46 No. 11, pp. 1497-1512.
Bleichrodt, H. and Pinto, J.L. (2000), "A Parameter-Free Elicitation of the Probability Weighting Function in Medical Decision Analysis", Management Science, Vol. 46 No. 11, pp. 1485-1496.
Camerer, C. and Ho, T.H. (1994), "Violations of the Betweenness Axiom and Nonlinearity in Probability", Journal of Risk and Uncertainty, Vol. 8 No. 2, pp. 167-196.
Devers, C., Wiseman, R. and Holmes, M. (2007), "The effects of endowment and loss aversion in managerial stock option valuation", Academy of Management Journal, Vol. 50 No. 1, pp. 191-208.
Dittmann, I. and Maug, E. (2007), "Lower salaries and no options? On the optimal structure of executive pay", The Journal of Finance, Vol. 62 No. 1, pp. 303-343.
Hall, B. and Murphy, K. (2000), "Optimal exercise prices for executive stock options", American Economic Review, Vol. 90 No. 2, pp. 209-214.
Hall, B. and Murphy, K. (2002), "Stock options for undiversified executives", Journal of Accounting and Economics, Vol. 33 No. 2, pp. 3-42.
Hallock, K. and Olson, G. (2006), "The value of stock options to non-executive employees", Working Paper No. 11950, National Bureau of Economic Research, Cambridge.
Heath, C., Huddart, S. and Lang, M. (1999), "Psychological factors and stock option exercise", Quarterly Journal of Economics, Vol. 114 No. 2, pp. 601-628.
Henderson, V. (2005), "The impact of the market portfolio on the valuation, incentives and optimality of executive stock options", Quantitative Finance, Vol. 5 No. 1, pp. 35-47.
Hodge, F., Rajgopal, S. and Shevlin, T. (2006), "How do managers value stock options and restricted stock?", Working Paper, University of Washington.
Holmstrom, B.R. and Milgrom, P.R. (1987), "Aggregation and linearity in the provision of intertemporal incentives", Econometrica, Vol. 55 No. 2, pp. 303-328.
Huddart, S. and Lang, M. (1996), "Employee stock option exercises: an empirical analysis", Journal of Accounting and Economics, Vol. 21 No. 1, pp. 5-43.
Jensen, M. and Murphy, K.J. (1990), "Performance pay and top-management incentives", Journal of Political Economy, Vol. 98 No. 2, pp. 225-264.
Johnson, S.A. and Tian, Y.S. (2000), "The value and incentive effects of non-traditional executive stock option plans", Journal of Financial Economics, Vol. 57 No. 1, pp. 3-34.
Kahneman, D. and Tversky, A. (1979), "Prospect Theory: An analysis of decision under risk", Econometrica, Vol. 47 No. 2, pp. 263-292.
Lambert, R., Larcker, D. and Verrecchia, R. (1991), "Portfolio considerations in valuing executive compensation", Journal of Accounting Research, Vol. 29 No. 1, pp. 129-149.
Lambert, R. and Larcker, D. (2001), "How do employees value (often incorrectly) their stock options?", available at: knowledge@wharton.
Sawers, K., Wright, A. and Zamora, V. (2006), "Loss aversion, stock-based compensation and managerial risk-seeking behavior", paper presented at the AAA 2007 Management Accounting Section Meeting, available at: http://ssrn.com/abstract=864224
Thaler, R. (1999), "Mental Accounting Matters", Journal of Behavioral Decision Making, Vol. 12 No. 3, pp. 183-206.
Tversky, A. and Kahneman, D. (1992), "Advances in Prospect Theory: Cumulative representation of uncertainty", Journal of Risk and Uncertainty, Vol. 5 No. 4, pp. 297-323.
Wu, G. and Gonzalez, R. (1996), "Curvature of the Probability Weighting Function", Management Science, Vol. 42 No. 12, pp. 1676-1690.





EMERGING MARKETS<br />





COMOVEMENTS IN THE VOLATILITY OF EMERGING EUROPEAN STOCK MARKETS<br />

Radu Lupu, Institute for Economic Forecasting, Romania<br />

Iulia Lupu, Center for Financial and Monetary Research "Victor Slavescu", Romania

Email: radu.a.lupu@gmail.com, iulia.lupu@gmail.com<br />

Abstract. The analysis of the comovements of stock market returns has been approached with many modeling techniques, ranging from simple and GARCH-style dynamic conditional correlation to multivariate GARCH and studies of the bivariate distribution. The quest to analyze the now standardized concept of international contagion made room for the employment of all these techniques. Our paper focuses on the analysis of the comovements in the volatilities of the returns of stock market indices from the most important developed and emerging European countries, using different forms of computation for different frequencies, from intra-day 5-minute returns to weekly returns (data from Bloomberg). After a brief characterization of the distribution of returns and a reconfirmation of the stylized facts for the European emerging markets, we focus on the clustering effect of volatilities, in an attempt to identify the moments when a new cluster is formed, i.e. when the volatilities change their size (from small to big or from big to small). The analysis of these events for the respective countries intends to reveal the mechanism of international information transmission. The paper also fits a jump-diffusion process, along the lines of Maheu and McCurdy (2007), adjusted for the series of volatilities, where the Poisson process characterizes the time until a change in the volatility cluster occurs.

Keywords: comovement, returns' volatility, European emerging stock markets

JEL classification: C39, G15

1 Introduction<br />

Even if the process of financial globalization has not followed a linear trend over time, and the exact timing of financial liberalization remains somewhat controversial, there is a broad consensus among economists that capital markets are much more integrated today than they were 30 years ago.

Over the years, many papers have contributed to the very important debate on the interaction across international stock markets, looking at volatility spillovers, correlation breakdowns, and trends in correlation patterns. The integration of financial markets wore down much of the gains from international diversification, which rely on low correlations across international stock markets. The intensity of the comovements and spillover effects driven by financial integration may increase the risk of global financial instability.

Most studies of comovements in stock markets have focused on developed economies. Lately there has been a growing body of empirical research on emerging capital markets, partly in response to the diversifying activities of multinational enterprises in these markets, and as a result of the growing interest shown by private and large institutional investors seeking to diversify their portfolios in international capital markets. International differences in the institutional framework of emerging market economies may play an important role in the magnitude of shocks transmitted across countries.

When one compares US and European stock markets, which have different trading hours, the international transmission mechanism becomes observable. When the New York stock market opens its business day, many things have already happened on the European stock markets. Similarly, European brokers take into account how the New York market ended. These equity markets are linked through trade and investment, and because of that, any change in economic fundamentals in one region can have implications for the other. At the same time, both foreign exchange markets and national stock markets share a number of facts for which a satisfactory explanation is still missing in standard theories of financial markets.

In this paper the analysis of the comovements of stock market returns is approached with several modeling techniques, ranging from simple and GARCH-style dynamic conditional correlation to multivariate GARCH and studies of the bivariate distribution. The paper investigates the behavior of stock market indexes in two complementary ways: on the one hand, we study the properties of the jumps (outliers) in the high-frequency stock market returns across many European indexes and, on the other hand, we try to capture the behavior of the co-movement of the volatilities of these indexes at the same frequency.



This paper is organized as follows. Section 2 presents the literature review. Section 3 provides a data overview and the methodology, and Section 4 discusses the empirical results. Finally, Section 5 concludes.

2 Literature review<br />

Economists have been studying why volatility propagates from one market to another for a long time. Grubel (1968) is the most cited paper representing the start of research in the field of stock market return comovement. Since then, the analysis of the benefits of international portfolio diversification and of stock market synchronization has received special attention in international finance.

Bekaert and Harvey (1995) and Forbes and Rigobon (1999) are only some of the important papers that investigate the cross-country linkages between stock markets. As a matter of fact, a growing body of literature has emerged more recently on the issue of international stock price comovement (King et al. (1994), Lin et al. (1994), Longin and Solnik (1995, 2001), Karolyi and Stulz (1996), Forbes and Rigobon (2002), Brooks and Del Negro (2005, 2006)). In particular, most of those studies have found that the comovement of stock returns is not constant over time. Evidence of increasing international comovement of stock returns since the mid-90s among the major developed countries was found by Brooks and Del Negro (2004) and Kizys and Pierdzioch (2009).

The determinants of cross-country financial interconnections are split into trade intensity factors (Chinn and Forbes, 2004), financial development factors (Dellas and Hess, 2005), business cycle synchronization factors (Walti, 2005), and geographical variables (Flavin et al., 2002). The divergence of the results and main conclusions can be partly explained by the high degree of heterogeneity 1 in the empirical approaches adopted in the literature. However, the comovement analysis should also take into account the distinction between the short- and long-term investor (Candelon et al. (2008)).

The comovement of stock returns has usually been evaluated through the correlation coefficient, while its evolving properties have been investigated either through a rolling-window correlation coefficient, as in the study of Brooks and Del Negro (2004), or by considering non-overlapping sample periods, as in the papers of King and Wadhwani (1990) and Lin et al. (1994). Morana and Beltratti (2008) found strong linkages across the European, US and Pacific Basin stock markets, involving comovements in prices, returns and volatility over the period 1973-2004. Their results show that the heterogeneity between Europe and the US has steadily declined, these markets being currently strongly integrated.

Applying a new technique, wavelet analysis, which allows for simultaneous characterization in time and frequency, Rua and Nunes (2009) focused on the German, Japanese, UK and US stock markets over the last four decades. A noteworthy finding of their paper is that the strength of the comovement of international stock returns depends on the frequency. The authors argue that the comovement between markets is stronger at the lower frequencies, suggesting that the benefits from international diversification may be relatively less important in the long term than in the short term, and that the strength of the comovement in the time-frequency space varies across countries as well as across sectors. Taking into account that the United States was the epicenter of the crisis, Didier et al. (2010) analyzed the factors driving the comovement between US stock returns and those of 83 countries, differentiating the periods before and after the collapse of Lehman Brothers. The authors argue that there is evidence of a "wake-up call" or "demonstration effect" in the first stage of the crisis, where investors became aware that certain vulnerabilities present in the US context could put other economies at risk: countries with vulnerable banking and corporate sectors exhibited higher comovement with the US market, the main transmission channel being the financial one.

The stock exchanges of Central and Eastern European countries perform quite differently compared to developed markets. The first studies to identify these differences are Barry, Peavy III and Rodriguez (1998), Harvey (1995), Divecha, Drach and Stefek (1992), as well as Bekaert et al. (1998). The literature has evidenced a number of empirical regularities: high volatility, low correlations with developed markets and within emerging markets, high long-horizon returns, and more variability in the predictability power as compared to the returns of the

1 Heterogeneity is observed in the sample of included countries (developed vs. developing countries), the nature of the econometric approach<br />

(cross-sectional vs. time-series), the measurement of market comovement, and the nature and measurement of explanatory factors.<br />



stocks traded in the developed markets. It is also well evidenced that emerging markets are more likely to experience<br />

shocks induced by regulatory changes, exchange rate devaluations, and political crises.<br />

Pajuste (2002) observes that Central and Eastern European capital markets are quite different in terms of their correlations with European Union capital markets. While the Czech Republic, Hungary and Poland display higher correlations among themselves and with the European Union market, Romania and Slovenia show nonexistent or even negative correlation with the European Union capital market. Stock market convergence of Central and Eastern European (CEE) countries to the rest of Europe was further studied by Harrison and Moore (2009), using three approaches to obtain time-varying estimates of the comovement between returns: realized correlation analysis, rolling unit root tests, and recursive cointegration tests. The results suggest that there is a relatively weak correlation between stock markets in CEE countries and those in Europe, with the link between the exchanges strengthening since 2002. Analyses in this area were also carried out by Horobet and Lupu (2009) and Lupu and Lupu (2009), showing the properties of these correlations with different techniques - cointegration and Granger causality tests on one hand and dynamic conditional correlations performed at the burst of the current crisis on the other hand.

Harrison, Lupu and Lupu (2010) identified the statistical properties of Central and Eastern European stock market dynamics. The paper focuses on the stock market indices of ten emerging countries from the Central and Eastern European region - Slovenia, the Slovak Republic, Estonia, Latvia, Lithuania, Bulgaria, the Czech Republic, Romania, Hungary and Poland - over the 1994-2006 period, presents evidence of stationarity for the returns of these indices, and identifies some common characteristics of these markets taken as a whole.

The international transmission of stock returns and volatility was investigated by Lin, Engle and Ito (1994) using intradaily data for the Tokyo and New York markets. They argue that information revealed during the trading hours of one market has a global impact on the returns of the other market and that the interdependence in returns and volatilities is generally bi-directional.

As stipulated in the literature (Soydemir (2000)), emerging markets respond more quickly to shocks originating in their own market than to foreign market disturbances, and the emerging market economies that have opened their markets to achieve greater financial integration are more prone to external shocks.
their markets to achieve greater financial integration are more prone to externalshocks.<br />

Current research has documented the importance of jump dynamics in combination with autoregressive volatility for modeling returns. Jorion (1988), Andersen et al. (2002), Chib et al. (2002), Eraker et al. (2003), Chernov et al. (2003), and Maheu and McCurdy (2004) are only a few examples. Jumps provide a useful addition to stochastic volatility models by explaining occasional, large, abrupt moves in financial markets, accounting for neglected structure, but they are generally not used to capture volatility clustering. Maheu and McCurdy (2007) proposed a new discrete-time model of returns in which jumps capture persistence in the conditional variance and higher-order moments, and the evaluation focuses on the dynamics of the conditional distribution of returns using density and variance forecasts. The empirical results indicate that the heterogeneous jump model effectively captures volatility persistence through jump clustering and that the jump-size variance is heteroskedastic and increasing in volatile markets.
volatile markets.<br />

3 Data and methodology<br />

The data we used consist of five-minute stock market index returns from some of the developed European markets as well as the Eastern markets: DAX (Germany), CAC (France), UKX (UK), IBEX (Spain), SMI (Switzerland), FTSEMIB (Italy), PSI20 (Portugal), ISEQ (Ireland), ATX (Austria), WIG (Poland), PX (Czech Republic), BUX (Hungary), BET (Romania) and SBITOP (Slovenia). The period we took into account was from the 3rd of August 2010 until the 10th of February 2011.

The trading sessions differ across the countries in our analysis (some start at 8:00 local time, others start at 8:30, and they tend to stop at different moments), which is why, since we are interested in studying the comovement of these returns, we had to build a database that identifies the moments in time when all the indexes were traded. Another issue was that high-frequency returns tend to be small in size, while at the turn of the day we may find higher values for the returns. This is why we decided to take out of the sample the returns recorded at the change of the day (the returns from the value of the index at the end of the day to the value of the index at the beginning of the next day). Therefore, our returns are not presumed to show any jumps (outliers) caused by the accumulation of information between trading sessions.

[Figure 1 here: box plots of the five-minute returns for SBITOP, BET, BUX, PX, WIG, ATX, ISEQ, PSI20, FTSEMIB, SMI, IBEX, UKX, CAC and DAX; the vertical axis runs from -.025 to .015]

Figure 1. The distribution of the five-minute returns: medians, means and outliers

In terms of methodology, we started the analysis by identifying outliers, in the sense of finding the moments when the returns were outside a 95% confidence interval for each stock index. Next, we analyzed the simultaneity of the jumps over the common sample of all the indexes. The results of this analysis are provided in the next section.
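The outlier-flagging and simultaneity count described above can be sketched as follows; the synthetic return matrix stands in for the Bloomberg common-sample data, and the two-sigma band mirrors the 95% interval used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the common-sample return matrix:
# rows = five-minute intervals, columns = the 14 indexes.
returns = rng.normal(0.0, 0.001, size=(2413, 14))

# Flag observations outside a 95% band (two sigmas from the mean).
mu = returns.mean(axis=0)
sigma = returns.std(axis=0)
outliers = np.abs(returns - mu) > 2.0 * sigma        # boolean matrix

# For every interval with at least one outlier, count how many
# indexes jumped in the same five-minute interval.
simultaneous = outliers.sum(axis=1)
simultaneous = simultaneous[simultaneous > 0]
counts = np.bincount(simultaneous, minlength=15)[1:]  # cases with 1..14 joint outliers
share = counts / counts.sum()                         # the distribution reported in Table 1
```

With independent synthetic returns the joint outliers are rare; on the real data the paper finds them to be the norm, which is the co-movement result.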

The following step was to use a Dynamic Conditional Correlation GARCH-like model (DCC) to fit the changes in the correlations of the returns at the five-minute frequency. The dynamic correlations, as well as the GARCH estimates of the volatilities, were then used to characterize the co-movement of the high-frequency returns.

The specification of the DCC model starts from the GARCH-like specification for the conditional covariance

$$\sigma_{ij,t+1} = \omega + \alpha R_{i,t} R_{j,t} + \beta \sigma_{ij,t},$$

which makes the correlation between returns $i$ and $j$:

$$\rho_{ij,t+1} = \frac{\omega + \alpha R_{i,t} R_{j,t} + \beta \sigma_{ij,t}}{\sqrt{\left(\omega + \alpha R_{i,t}^{2} + \beta \sigma_{i,t}^{2}\right)\left(\omega + \alpha R_{j,t}^{2} + \beta \sigma_{j,t}^{2}\right)}}.$$

Standardizing each return by its dynamic standard deviation, we get

$$z_{i,t+1} = \frac{R_{i,t+1}}{\sigma_{i,t+1}}.$$

We notice that the conditional covariance of the news equals the conditional correlation of the raw returns:

$$E_t\left[z_{i,t+1} z_{j,t+1}\right] = E_t\left[\frac{R_{i,t+1}}{\sigma_{i,t+1}} \cdot \frac{R_{j,t+1}}{\sigma_{j,t+1}}\right] = \frac{\sigma_{ij,t+1}}{\sigma_{i,t+1}\,\sigma_{j,t+1}} = \rho_{ij,t+1}.$$


Thus, modelling the conditional correlation of the raw returns is equivalent to modelling the conditional covariance of the standardized returns. We can consider GARCH(1,1)-type specifications of the form

$$q_{ij,t+1} = \bar{\rho}_{ij} + \alpha\left(z_{i,t} z_{j,t} - \bar{\rho}_{ij}\right) + \beta\left(q_{ij,t} - \bar{\rho}_{ij}\right).$$

In estimating the dynamic conditional correlation models suggested above, we can rely on the quasi maximum likelihood estimation (QMLE) method.
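A minimal sketch of the resulting correlation recursion for one pair of series, assuming the returns have already been standardized by their fitted GARCH volatilities and using illustrative (not estimated) values of α and β:

```python
import numpy as np

def dcc_pair_correlations(z_i, z_j, alpha=0.05, beta=0.93):
    """Filter dynamic conditional correlations for one pair of
    GARCH-standardized return series (DCC-style recursion).

    q follows the GARCH(1,1)-type recursion around the unconditional
    correlation rho_bar; correlations come from normalizing q.
    """
    rho_bar = np.corrcoef(z_i, z_j)[0, 1]
    q_ij, q_ii, q_jj = rho_bar, 1.0, 1.0
    rho = np.empty(len(z_i))
    for t in range(len(z_i)):
        rho[t] = q_ij / np.sqrt(q_ii * q_jj)
        # Update the pseudo-covariances after observing period t.
        q_ij = rho_bar + alpha * (z_i[t] * z_j[t] - rho_bar) + beta * (q_ij - rho_bar)
        q_ii = 1.0 + alpha * (z_i[t] ** 2 - 1.0) + beta * (q_ii - 1.0)
        q_jj = 1.0 + alpha * (z_j[t] ** 2 - 1.0) + beta * (q_jj - 1.0)
    return rho

# Usage with simulated standardized returns:
rng = np.random.default_rng(1)
z_i = rng.standard_normal(500)
z_j = 0.6 * z_i + 0.8 * rng.standard_normal(500)
rho = dcc_pair_correlations(z_i, z_j)
```

Normalizing by the diagonal elements keeps every filtered correlation inside [-1, 1], because each update is a positive-weighted combination of positive semi-definite matrices.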

The next step in our analysis was the calibration of a VAR model on the volatilities of the high-frequency returns, which were proxied by the squared returns. The VAR specification on volatilities allows for one-lag dependence among the 14 variables taken into account. The results are provided in the next section as well.
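The one-lag VAR on squared returns can be estimated equation by equation with ordinary least squares; a sketch on synthetic data (the dimensions match the paper's 14 indexes, the values do not):

```python
import numpy as np

rng = np.random.default_rng(2)
vol = rng.normal(0.0, 0.001, size=(2413, 14)) ** 2   # squared returns as volatility proxies

# VAR(1): vol_t = c + A @ vol_{t-1} + e_t, estimated equation by
# equation with ordinary least squares.
Y = vol[1:]                                          # left-hand side, (T-1) x 14
X = np.hstack([np.ones((len(Y), 1)), vol[:-1]])      # intercept column + one lag
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
c, A = coef[0], coef[1:]                             # 14 intercepts and the 14x14 lag matrix
```

The off-diagonal entries of A are the cross-market spillover coefficients the paper examines.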

The last step in our analysis was the calibration of a jump process following the specification of Maheu and McCurdy (2007), presented in the following lines.

r_t – returns for the periods t = 1, …, T
μ – the mean of the returns when there is no rare event (no jump)
J_t – indicator of the occurrence of a jump (J_t = 1 means that we observe a jump at moment t; if J_t = 0 there is no jump at moment t)
λ_t – the probability of a jump at moment t, i.e. Pr(J_t = 1)
ξ_t – the size of a possible jump at moment t
μ_J – the mean of the jump-size variable
σ²_{J,t} – the variance of the jump-size variable
X_{t-1} – the absolute value of r_{t-1}, which allows the variance σ²_{J,t} to be positive.

The estimation of the model was carried out through Markov Chain Monte Carlo using a Gibbs algorithm. In the analysis of Maheu and McCurdy (2007), the vector of parameters θ = {μ, σ², μ_J, γ, ζ}, where γ = {γ_0, γ_1} and ζ = {ζ_0, ζ_1}, is augmented with the unobserved state vectors λ = {λ_1, …, λ_T}, the jump times J = {J_1, …, J_T} and the jump sizes ξ = {ξ_1, …, ξ_T}.

We compute the conditional densities for all the parameters and run 5000 simulations with draws from these densities by the following algorithm:

1. sample μ^i | θ_{-μ}^{i-1}, λ^{i-1}, J^{i-1}, ξ^{i-1}, r
2. sample σ^{2,i} | θ_{-σ²}^{i-1}, λ^{i-1}, J^{i-1}, ξ^{i-1}, r
3. sample μ_J^i | θ_{-μ_J}^{i-1}, λ^{i-1}, J^{i-1}, ξ^{i-1}, r
4. sample ζ^i | θ_{-ζ}^{i-1}, λ^{i-1}, J^{i-1}, ξ^{i-1}, r
5. sample the blocks (λ_t, ξ_t)^i | θ^{i-1}, λ_{-t}^{i-1}, J^{i-1}, ξ_{-t}^{i-1}, r
6. sample γ^i | θ_{-γ}^{i-1}, λ^i, J^{i-1}, ξ^{i-1}, r
7. sample ξ^i | θ^i, λ^i, J^{i-1}, r
8. sample J^i | θ^i, λ^i, ξ^i, r
9. go to 1

where r = {r_1, …, r_T} and i is the iteration number.
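To make the state variables concrete, a path of returns from the jump specification can be simulated as below; all parameter values are illustrative placeholders, since in the paper they are estimated by MCMC rather than fixed:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2413                         # same length as the five-minute sample
mu, sigma = 0.0, 0.001           # mean and diffusive volatility (illustrative)
mu_J, sigma_J = 0.0, 0.005       # jump-size mean and standard deviation (illustrative)
lam = np.full(T, 0.02)           # jump probability per interval, Pr(J_t = 1)

J = rng.random(T) < lam                            # jump indicators J_t
xi = rng.normal(mu_J, sigma_J, size=T)             # potential jump sizes xi_t
r = mu + rng.normal(0.0, sigma, size=T) + J * xi   # returns with occasional jumps
```

The Gibbs steps listed above do the reverse of this simulation: given r, they draw the latent J, ξ and λ together with the parameters from their conditional densities.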

4 Results<br />

In terms of co-movements, the most interesting finding of our analysis is that, if we take into account only the values that lie outside a very conventional 95% confidence interval (two sigmas away from the mean in both directions), we notice that for this high-frequency data these events usually happen simultaneously for all the indexes in our database. If we consider these to be jumps, then our dynamics show a close co-movement of the stock markets in Europe.

On average, we found that, out of the 14 indexes we took into account, at the 5-minute frequency about 10.15 of them jump at the same time. We consider this to be evidence of co-movement: when unusual returns are realized in the stock markets, they tend to be realized at the same time across all the stock markets, showing that information moves quickly around Europe and that the large movements are probably caused by important information affecting the whole environment, since many markets react within a five-minute interval.

In order to better characterize the structure of the outliers (jumps) and, at the same time, reveal the properties of the co-movements, we organized the data to compute the number of situations in which we had simultaneous outliers out of the whole sample of indexes taken into account.

Simultaneous outliers:  1         2         3         4         5         6         7         8         9         10        11        12        13        14
Share of cases:         52.43%    6.60%     4.86%     4.17%     4.17%     4.17%     1.39%     4.86%     3.82%     3.13%     2.78%     1.04%     5.90%     0.69%
Mean outlier value:     -8.15E-05 -1.02E-03 1.80E-03  -1.39E-03 1.47E-03  2.68E-03  -2.37E-03 -2.50E-03 1.23E-03  3.61E-03  7.20E-03  -1.08E-02 4.45E-04  2.79E-03

Table 1: The percentage of simultaneous outliers out of all the situations when we experienced outliers in the sample of our stock market index returns

As the table above shows, out of the 288 moments when outliers were recorded (from a sample of 2413 records at the five-minute frequency), a high share (52.43%) happened in isolated situations, i.e. only one stock index experienced the outlier; but almost half of the outliers involved at least 2 indexes at once, with many occurring in 8 and especially in 13 stock markets at the same time. It is also important evidence that, looking at the mean values of the outliers, the isolated ones (only one index exhibiting a jump) are the smallest, while for the others we see higher absolute values. This can be considered proof that the big jumps usually tend to spread, while the smaller ones tend to stay local; on the other hand, this difference in size might also explain the larger number of isolated jumps, since a smaller size means they are closer to the mean and hence tend to have a higher frequency (higher rate of realization).



k (simultaneous outliers) 1 2 3 4 5 6 7 8 9 10 11 12 13 14<br />
Germany 3 11 21 8 25 58 75 71 91 89 88 100 100 100<br />
France 2 11 14 8 25 50 75 71 91 100 100 100 100 100<br />
UK 2 0 21 33 50 58 50 57 91 89 75 100 100 100<br />
Spain 2 11 14 25 25 25 75 71 73 67 75 100 100 100<br />
Switzerland 3 32 14 33 33 50 75 64 73 89 88 100 88 100<br />
Italy 1 21 21 33 42 8 75 64 64 78 88 100 100 100<br />
Portugal 5 11 21 17 17 42 50 50 64 67 100 100 100 100<br />
Ireland 1 16 43 50 25 33 50 50 4 67 50 67 94 100<br />
Austria 2 5 7 33 67 50 50 57 55 89 88 67 100 100<br />
Poland 7 0 36 42 42 58 50 50 64 56 75 67 100 100<br />
Czech Republic 3 21 7 17 50 50 75 50 73 89 88 100 100 100<br />
Hungary 3 21 29 50 42 42 0 43 45 56 100 100 100 100<br />
Romania 12 11 36 25 17 33 0 71 27 44 75 100 88 100<br />
Slovenia 54 32 14 25 42 42 0 29 27 22 13 0 29 100<br />
Table 2: The frequency of outliers for each stock market index (in percentage)<br />

The previous table shows the proportion of outlier events in which each index was involved, out of all the outliers identified in the sample. Hence, out of all the situations in which only one outlier occurred in the whole sample (not accompanied at the same moment by another outlier), Germany was involved in only 3% of the cases (i.e. Germany had an isolated outlier in only 3% of these situations). Likewise, out of all the situations with 7 simultaneous outliers, Poland had an outlier in 50% of the cases.<br />

The countries that tend to have isolated outliers are those with large numbers on the left side of the table and relatively small numbers on the right side; Slovenia is the only one in such a situation. The vast majority of the countries show a higher proportion of outliers that occurred at the same time. This is important evidence of stable co-movement of the European stock markets, showing that relevant information has the power to deliver sizeable shocks at the regional level.<br />

The next step was to compute dynamic conditional correlations using the GARCH specification mentioned in the previous section. The mean correlations for each pair are presented in the following table.<br />

Germany France UK Spain Switz. Italy Port. Irel. Austria Poland Czech R. Hungary Romania<br />
France 0.86<br />
UK 0.78 0.79<br />
Spain 0.70 0.77 0.68<br />
Switzerland 0.74 0.72 0.70 0.64<br />
Italy 0.75 0.81 0.71 0.78 0.66<br />
Portugal 0.50 0.54 0.51 0.57 0.49 0.54<br />
Ireland 0.34 0.34 0.33 0.29 0.29 0.31 0.27<br />
Austria 0.44 0.42 0.43 0.40 0.41 0.43 0.38 0.26<br />
Poland 0.50 0.45 0.45 0.40 0.40 0.41 0.35 0.18 0.33<br />
Czech Republic 0.41 0.40 0.39 0.35 0.34 0.37 0.34 0.22 0.48 0.33<br />
Hungary 0.31 0.29 0.31 0.24 0.27 0.25 0.26 0.18 0.24 0.28 0.27<br />
Romania 0.10 0.00 0.10 -0.70 0.07 0.08 0.04 0.04 0.14 0.05 0.07 0.08<br />
Slovenia 0.00 0.00 0.01 0.01 0.03 -0.01 -0.02 -0.02 0.02 0.02 0.02 -0.03 -0.01<br />
Table 3: The mean values of the correlations computed with the DCC model for five-minute returns<br />

We can notice that the highest correlations appear in the upper part of the table, which means that index returns tend to be correlated mostly among the developed countries and less correlated with the Eastern European ones. The latter are also not strongly correlated among themselves, which is evidence of their relative independence from the Western European stock markets.<br />
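Estimating the full Engle (2002) DCC-GARCH model takes several steps (univariate GARCH fits, then the correlation recursion). As a hedged illustration of the idea of a time-varying conditional correlation, the sketch below uses an exponentially weighted (RiskMetrics-style) recursion instead of the paper's DCC estimator; the function name, smoothing constant and data are assumptions.<br />

```python
import numpy as np

def ewma_correlation(x, y, lam=0.94):
    """Exponentially weighted conditional correlation between two
    return series -- a RiskMetrics-style simplification of the DCC
    recursion, NOT the full Engle (2002) DCC-GARCH estimator."""
    x = x - x.mean()
    y = y - y.mean()
    # initialise the recursion with the unconditional second moments
    vx, vy, cov = (x ** 2).mean(), (y ** 2).mean(), (x * y).mean()
    rho = np.empty(x.size)
    for t in range(x.size):
        vx = lam * vx + (1 - lam) * x[t] ** 2
        vy = lam * vy + (1 - lam) * y[t] ** 2
        cov = lam * cov + (1 - lam) * x[t] * y[t]
        rho[t] = cov / np.sqrt(vx * vy)
    return rho
```

Averaging such a conditional correlation path over the sample gives a per-pair mean of the kind reported in Table 3.<br />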

The next analysis in our work dealt with the construction of a VAR for the volatilities of the stock index returns.<br />

Due to lack of space, the final results of the estimation are not provided here but we mention that we estimated a<br />

VAR up to the second lag for all the series in our database. Only a few coefficients proved to be significant:<br />

DAX, CAC, UKX, SMI and FTSEMIB seem to be dependent on the first lag of WIG, PSI20 is dependent on the<br />

second lag of DAX and the second lag of WIG, ATX is dependent on the first lag of SMI, WIG is dependent on the<br />

first lag of DAX and BUX is dependent on the second lag of FTSEMIB and its first lag.<br />
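A VAR(2) of the kind described can be estimated equation by equation with ordinary least squares. The sketch below is a self-contained numpy illustration, not the authors' estimation (which would also include the significance tests behind the reported results); the function name and array shapes are assumptions.<br />

```python
import numpy as np

def fit_var(data, p=2):
    """Least-squares estimation of a VAR(p): each variable is regressed
    on p lags of every variable plus an intercept.
    data : (T, N) array of (e.g. realised-volatility) series.
    Returns (intercepts, coefs), coefs of shape (p, N, N), where
    coefs[l, i, j] is the effect of lag l+1 of series j on series i."""
    T, N = data.shape
    # stack lagged regressors: [1, y_{t-1}, ..., y_{t-p}]
    X = np.hstack([np.ones((T - p, 1))] +
                  [data[p - l - 1:T - l - 1] for l in range(p)])
    Y = data[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    intercepts = B[0]
    coefs = B[1:].T.reshape(N, p, N).transpose(1, 0, 2)
    return intercepts, coefs
```

With the 14 volatility series stacked columnwise, `coefs[0]` and `coefs[1]` would hold the first- and second-lag coefficient matrices whose few significant entries are listed above.<br />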

The last part of our analysis consisted in the calibration of the Maheu and McCurdy (2007) model. Our aim was to find the coefficients for each of the returns in our sample and then to analyze the parameters of the jump densities by comparing them across the returns in our database. The jump densities were then introduced into another VAR analysis to see whether they exhibit any kind of dependence. We do not report the results here, but we mention that the estimation did not produce any significant parameters.<br />

5 Conclusions<br />

This paper uses high-frequency stock market index returns to check for co-movements in the European region. Using 14 indexes over a period of about one year, the common dynamics of the five-minute returns were analyzed with several tools: the properties of the distribution of each series, the jumps defined as outliers relative to the 95%<br />


confidence interval and their simultaneity, the dynamics of the correlations, the relationships among volatilities and<br />

then the relationships among the densities for a jump-diffusion model.<br />

Our study revealed significant dependence of each return series on the movements of the other returns in the sample, especially for the extreme values. These “jumps” tend to be simultaneous at the regional level and are evidence that information spreads fast across European capital markets.<br />

6 Acknowledgements<br />

This research is part of the CNCSIS Young Research Teams research project “Bayesian Estimation of the Structural Macroeconomic and Financial Relationships in Romania: Implications for Asset Pricing”, contract number 25/5.08.2010, director Dr. Petre Caraiani, Institute for Economic Forecasting, Romanian Academy.<br />

7 References<br />

Andersen, T. G., Benzoni, L. & Lund, J. (2002). An empirical investigation of continuous-time equity return models. Journal of Finance, no. 62, pp. 1239–1284.<br />
Barry, C. B., Peavy III, J. W. & Rodriguez, M. (1998). Performance Characteristics of Emerging Capital Markets. Financial Analysts Journal, vol. 54, no. 1, pp. 72–80.<br />
Beine, M. & Candelon, B. (2011). Liberalisation and stock market co-movement between emerging economies. Quantitative Finance, vol. 11, no. 2, pp. 299–312.<br />
Bekaert, G., Erb, C. B., Harvey, C. R. & Viskanta, T. E. (1998). Distributional Characteristics of Emerging Market Returns and Allocation. Journal of Portfolio Management, vol. 24, no. 2, pp. 102–116.<br />
Brooks, R. & Del Negro, M. (2004). The rise in comovement across national stock markets: market integration or IT bubble?. Journal of Empirical Finance, no. 11, pp. 659–680.<br />
Brooks, R. & Del Negro, M. (2005). Country versus region effects in international stock returns. Journal of Portfolio Management, Summer 2005, pp. 67–72.<br />
Brooks, R. & Del Negro, M. (2006). Firm-level evidence on international stock market comovement. Review of Finance, no. 10, pp. 69–98.<br />
Candelon, B., Piplack, J. & Straetmans, S. (2008). On measuring synchronization of bulls and bears: the case of East Asia. Journal of Banking and Finance, no. 32, pp. 1022–1035.<br />
Chernov, M., Gallant, R. A., Ghysels, E. & Tauchen, G. (2003). Alternative models for stock price dynamics. Journal of Econometrics, no. 116, pp. 225–257.<br />
Chib, S., Nardari, F. & Shephard, N. (2002). Markov chain Monte Carlo methods for stochastic volatility models. Journal of Econometrics, no. 108, pp. 281–316.<br />
Didier, T., Love, I. & Peria, M. S. M. (2010). What Explains Stock Markets’ Vulnerability to the 2007–2008 Crisis?. The World Bank Policy Research Working Paper 5224.<br />
Divecha, A. B., Drach, J. & Stefek, D. (1992). Emerging Markets: A Quantitative Perspective. Journal of Portfolio Management, vol. 19, no. 1, pp. 41–50.<br />
Eraker, B., Johannes, M. S. & Polson, N. G. (2003). The impact of jumps in volatility and returns. Journal of Finance, vol. 58, no. 3, pp. 1269–1300.<br />
Forbes, K. & Rigobon, R. (2002). No contagion, only interdependence: measuring stock market comovements. Journal of Finance, no. 57, pp. 2223–2261.<br />
Grubel, H. (1968). Internationally diversified portfolios: welfare gains and capital flows. American Economic Review, vol. 58, no. 5, pp. 1299–1314.<br />



Harrison, B., Lupu, R. & Lupu, I. (2010). Statistical Properties of the CEE Stock Market Dynamics. A Panel Data Analysis. The Romanian Economic Journal, Year XIII, no. 37, pp. 41–54.<br />
Harrison, B. & Moore, W. (2009). Stock Market Comovement in the European Union and Transition Countries. Financial Studies, vol. 13, no. 3, pp. 124–151.<br />
Harvey, C. R. (1995). Predictable Risk and Returns in Emerging Markets. Review of Financial Studies, vol. 8, no. 3, pp. 773–816.<br />
Horobet, A. & Lupu, R. (2009). Are Capital Markets Integrated? A Test of Information Transmission within the European Union. Romanian Journal of Forecasting, vol. 10, no. 2.<br />
Jorion, P. (1988). On jump processes in the foreign exchange and stock markets. Review of Financial Studies, vol. 1, no. 4, pp. 427–445.<br />
Karolyi, G. A. & Stulz, R. M. (1996). Why do markets move together? An investigation of U.S.–Japan stock return comovements. Journal of Finance, vol. 51, no. 3, pp. 951–986.<br />
King, M., Sentana, E. & Wadhwani, S. (1994). Volatility and links between national stock markets. Econometrica, vol. 62, no. 4, pp. 901–933.<br />
King, M. & Wadhwani, S. (1990). Transmission of volatility between stock markets. Review of Financial Studies, vol. 3, no. 1, pp. 5–33.<br />
Kizys, R. & Pierdzioch, C. (2009). Changes in the international comovement of stock returns and asymmetric macroeconomic shocks. Journal of International Financial Markets, Institutions and Money, vol. 19, no. 2, pp. 289–305.<br />
Lin, W.-L., Engle, R. F. & Ito, T. (1994). Do Bulls and Bears Move Across Borders? International Transmission of Stock Returns and Volatility. The Review of Financial Studies, vol. 7, no. 3, pp. 507–538.<br />
Longin, F. & Solnik, B. (1995). Is the correlation in international equity returns constant: 1960–1990?. Journal of International Money and Finance, vol. 14, no. 1, pp. 3–26.<br />
Longin, F. & Solnik, B. (2001). Extreme correlation of international equity markets. Journal of Finance, vol. 56, no. 2, pp. 649–676.<br />
Lupu, R. & Lupu, I. (2009). Contagion across Central and Eastern European Stock Markets: A Dynamic Conditional Correlation Test. Economic Computation and Economic Cybernetics Studies and Research, vol. 43, no. 4, pp. 173–186.<br />
Maheu, J. M. & McCurdy, T. H. (2004). News arrival, jump dynamics, and volatility components for individual stock returns. Journal of Finance, vol. 59, no. 2.<br />
Maheu, J. M. & McCurdy, T. H. (2007). Modeling foreign exchange rates with jumps. Working Paper tecipa-279, University of Toronto, Department of Economics.<br />
Morana, C. & Beltratti, A. (2008). Comovements in International Stock Markets. Journal of International Financial Markets, Institutions and Money, vol. 18, no. 1, pp. 31–4.<br />
Pajuste, A. (2002). Corporate Governance and Stock Market Performance in Central and Eastern Europe: A Study of Nine Countries. Stockholm School of Economics Working Paper. Available at SSRN: http://ssrn.com/abstract=310419 or doi:10.2139/ssrn.310419<br />
Rua, A. & Nunes, L. C. (2009). International comovement of stock market returns: A wavelet analysis. Journal of Empirical Finance, no. 16, pp. 632–639.<br />
Soydemir, G. (2000). International Transmission Mechanism of Stock Market Movements: Evidence from Emerging Equity Markets. Journal of Forecasting, no. 19, pp. 149–176.<br />



OWNERSHIP STRUCTURE, CASH CONSTRAINTS AND INVESTMENT BEHAVIOUR IN RUSSIAN<br />

FIRMS 1<br />

Tullio Buccellato (a) , Gian Fazio (b) and Yulia Rodionova (c)<br />

February 2011<br />


Abstract. In this paper, we investigate to what extent Russian firms are liquidity constrained in their<br />

investment behaviour and how ownership structure changes the relationship between internal funds and<br />

investment decisions of these firms. We estimate a structural financial accelerator model of investment and<br />

first test the hypothesis that Russian firms are cash constrained. We conduct random effects estimation on a<br />

large representative panel dataset of 8637 firms in the European part of Russia, using their balance sheet<br />

information over the period 2000-2004. Our results confirm that firms are liquidity constrained when the<br />

ownership structure is omitted from the econometric specifications. With regard to the ownership structure<br />

and degree of concentration, we find that state owned companies and private individuals/families are less<br />

cash constrained, independently of whether their ownership structure is concentrated. No significant impact<br />

is found for banks and financial companies and institutions.<br />

JEL codes: F1, F2, G24, G32, O3.<br />

Keywords: entrepreneurial firm, emerging market, financing choices, sales growth, innovation, manager's characteristics.<br />

(a) Dipartimento di Economia e Istituzioni, Università di Roma Tor Vergata.<br />

(b) School of Slavonic and East European Studies, University College London.<br />

(c) Department of Accounting and Finance, Leicester Business School, De Montfort University<br />

1 We are grateful to the participants of the 10th EACES Conference in Moscow, the CICM conference at London Metropolitan University and the research seminar at Leicester Business School at DMU, and to Panagiotis Andrikopoulos, Ashley Carreras, Tomila Lankina, Fred Mear and Pasquale Scaramozzino for helpful comments and suggestions.<br />



MORTGAGE HOUSING CREDITING IN RUSSIA: CRISIS OVERCOMING AND FURTHER<br />

DEVELOPMENT PERSPECTIVES<br />

Liudmila Guzikova, Saint Petersburg State Polytechnic University<br />

E-mail: guzikova@mail.ru<br />

Abstract. The Russian system of mortgage housing crediting (SMHC) was created on the basis of the American two-level model uniting the mortgage loan market and the mortgage-backed securities market. Until 2008 the Russian SMHC developed actively, and factors characteristic of an emerging economy played an important role in its development. It should especially be noted that the social expectations connected with the SMHC were initially overestimated. The financial and economic crisis has drawn attention to the national SMHC and to identifying the ways and prospects of its further development. Overcoming the consequences of the crisis and the further functioning of the SMHC demand a concept of managed, complex, balanced SMHC development. The basis of this concept is a system of development principles realized in the proposed balanced dynamic model, which coordinates social expectations, solvent demand, real estate supply, the investment potential of financial institutions and the population, and the capacity of the mortgage securities market, together with a uniform system of SMHC performance indicators.<br />

Keywords: Mortgage, Mortgage Market<br />

JEL classification: G210<br />

1 Introduction<br />

The national system of mortgage housing crediting (SMHC) in Russia was created on the basis of the American two-level model uniting the mortgage loan market and the mortgage securities market. By now the circle of its participants has been determined, the legal and regulatory base governing their mutual relations has been created, and the market infrastructure has been formed.<br />

The SMHC realizes only one of the housing-provision mechanisms possible in a market economy. However, at the governmental level the SMHC is considered the main instrument for dealing with the large-scale and extremely acute housing problem. This point of view is reflected in government programs providing grants to particular categories of the population in order to involve them in the SMHC.<br />

The financial and economic crisis has aggravated the problems and disproportions of the national SMHC and has drawn attention to its condition and further development prospects.<br />

2 Drawing lessons from the crisis and substantiation of the SMHC post-crisis development model<br />

2.1 Pre-crisis development and mechanisms of Russian SMHC crisis<br />

Since 2005 the volumes of mortgage housing crediting grew at high rates (figure 1); however, the volume of debt under mortgage housing credits did not exceed 2.6% of GNP (for comparison, in pre-crisis 2006 the volume of mortgage housing credits made up 76% of GNP in the USA, 101% in Denmark and 83% in Great Britain).<br />

Figure 1. Debt volume on housing mortgage loans in dynamics, million rbl.<br />



The financial and economic crisis that began in 2007 arose in the mortgage lending market of the USA. Owing to the globalization of the economy and the financial markets, virtually all economically developed and developing countries, including Russia, were drawn into the crisis.<br />

During the pre-crisis period, processes and tendencies similar to those in the American mortgage market took place in Russia:<br />
• a speculative rise in real estate prices;<br />
• the aspiration to use housing as an investment asset (Minz, 2007);<br />
• the liberalization of banks' requirements on borrower quality, caused by their aspiration to strengthen their position in the new market (Deljagin, 2008).<br />

However, the national SMHC has not played an appreciable role in the development of non-stationary processes in the Russian economy. Owing to its small scale and the backwardness of some of its elements, in particular the mortgage securities market, the SMHC became a victim of the crisis rather than its active conductor. We note that the maximum annual volume of mortgage loans was reached in 2008 and was enough to buy no more than 10.5% of the housing constructed in the same year at the then-current prices, assuming an initial payment of 30% of the housing cost. The fall of the Russian mortgage market took place in 2009.<br />

2.2 Features of Russian SMHC development<br />

The major factors determining the character of the national SMHC development are the low incomes of most of the population and the high regional and social differentiation of the population by income. At present, for a family of two persons with incomes at the bottom of the highest-income cluster indicated by state statistics (a monthly income of 25,000 rbl. per person), only a limited set of credit conditions is accessible: interest rates not above 12% for long crediting terms of over 15 years. This cluster makes up no more than 10% of the whole population.<br />

Drawing on statistical data for 81 regions of Russia as of 10/1/2010, we estimated the correlations between characteristics reflecting SMHC conditions (table 1).<br />

Parameters analyzed Correlation factor<br />

Per capita housing and housing construction volume 0.2654<br />

Per capita housing and income 0.1878<br />

Per capita housing construction volume and income 0.1482<br />

Housing price in the secondary market and per capita income 0.7880<br />

Housing price in the primary market and per capita income 0.7436<br />

Volume of housing construction and population 0.3108<br />

Population and total volume of mortgage loan debt 0.8763<br />

Urban population and total volume of mortgage loan debt 0.9414<br />

Agricultural population and total volume of mortgage loan debt 0.1337<br />

Per capita housing and volume of mortgage loan debt 0.0042<br />

Volume of housing construction and total volume of mortgage loan debt 0.2321<br />

Per capita volume of mortgage loan debt and income 0.5480<br />

Table 1: Correlation between regional characteristics of the population, housing construction and mortgage debt<br />

From the analysis results it is possible to draw the following conclusions:<br />
• the construction sector as a whole is not focused on solving the housing problem, as shown by the low correlations of housing construction volume with the level of housing provision (social aspect) and with the existing income level (commercial aspect);<br />
• the significance of population incomes in providing housing is rather small; the existing level of housing provision was not reached by market methods;<br />
• the price level in the housing market correlates significantly, but not perfectly, with the income level;<br />
• the volume of mortgage loans correlates significantly and positively with population size, with the overwhelming majority of mortgage borrowers being city dwellers;<br />
• the volume of mortgage loans is not connected with the housing level and has a low positive correlation with the income level.<br />
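The correlation factors in Table 1 are pairwise Pearson coefficients computed across the regions. A minimal sketch of the computation (the indicator names and values below are illustrative assumptions, not the paper's dataset):<br />

```python
import numpy as np

def pairwise_correlations(table):
    """table: dict mapping indicator name -> array of regional values
    (one entry per region).  Returns {(a, b): Pearson r} for every
    pair of indicators, as used for the regional SMHC comparisons."""
    names = list(table)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            out[(a, b)] = float(np.corrcoef(table[a], table[b])[0, 1])
    return out
```

With the 81-region series for per capita housing, construction volume, income, prices and mortgage debt as inputs, this reproduces the correlation factors of Table 1.<br />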

A correlation analysis between the indicators of income and housing level, housing market prices and the volumes of total and past-due mortgage debt (table 2) was carried out on data for the 15 regions leading in the volume of past-due debt on mortgage loans issued in roubles and in foreign currency. The results of this analysis can be summarized as follows:<br />
1. The level of housing expensiveness, measured as the ratio of the cost of a square meter to per capita income, has a low negative correlation with the volume of past-due debt on rouble loans and is virtually unconnected with the volume of past-due debt in currency;<br />
2. Per capita total mortgage debt on rouble loans has a low negative correlation with the per capita volume of past-due rouble mortgage debt;<br />
3. Per capita total mortgage debt on foreign currency loans has a strong positive correlation with the per capita volume of past-due currency mortgage debt.<br />

Parameters analyzed | for credits in roubles | for credits in currency<br />
Level of housing expensiveness and volume of past-due mortgage debt | -0.3213 | 0.0037<br />
Per capita mortgage debt and past-due mortgage debt | -0.1635 | 0.9428<br />
Per city dweller mortgage debt and past-due mortgage debt | -0.0937 | 0.9388<br />
Table 2: Correlation between characteristics of the population, the housing market and the mortgage loan market in the regions having the highest level of past-due mortgage debt<br />

Thus, it is possible to ascertain disproportions between housing construction volumes, income levels, housing provision and mortgage crediting features. Eliminating the revealed disproportions demands working out and consistently realizing a concept of managed, complex, balanced SMHC development.<br />

2.3 Current SMHC conditions<br />

Post-crisis restoration of the national SMHC is occurring fairly quickly. In 2010 Russian banks provided mortgage loans for the sum of 378.9 billion rbl., 2.5 times more than the volume of mortgage loans issued in 2009 (152.5 billion rbl.) but 42% less than in 2008 (655.8 billion rbl.). In total, borrowers received about 298 thousand mortgage loans in 2010, only 10% less than the pre-crisis indicators. The appearance of new SMHC participants on the market became a sign of overcoming the crisis: in 2010 there were 628 banks in the mortgage credit market, against 584 in 2009 and 602 operating banks in 2008. The share of past-due debt in the total amount of mortgage housing credits was 3.61% in 2010, confirming the rather rigorous bank requirements on mortgage borrowers and, partly, the effectiveness of measures to restructure past-due mortgage debt.<br />

The basic tendencies of post-crisis SMHC development are the following:<br />
1. an increase in the number of banks issuing mortgage housing credits. There were 523 SMHC participants issuing credits by the end of 2010, with 105 participants servicing credits issued earlier;<br />
2. further growth of the absolute and relative indicators characterizing the volume of mortgage housing credits issued. In 2010 the banks issued 301,433 mortgage housing credits for a total sum of 380.1 billion rbl., making up 10.4% of total consumer credit. The average credit size slightly increased, to 1.25 million rbl. The number of mortgage loans issued increased 2.3 times in comparison with the previous year;<br />



3. growth of the share of mortgage housing credits issued in the national currency (roubles). In 2010 the number of rouble credits equaled 298,213 for a total sum of 354.6 billion rbl., versus 3,220 credits issued in foreign currency. In comparison with 2009, the share of rouble credits increased by 2.2 percentage points, to 95.9%;<br />
4. a further decrease of credit interest rates and liberalization of credit granting conditions. The average interest rate decreased by 1.2 percentage points on rouble credits and by 1.7 points on currency credits, making up 13.1% and 11% respectively as of 1/1/2011. The average term of rouble credits issued was reduced by 1 month, to 196 months, whereas the term of foreign currency credits grew by 15 months, to 155 months;<br />
5. a steady tendency of growth in the past-due debt volume and its share in the total mortgage debt. In comparison with 2009, past-due debt on rouble credits increased by 27.2%, and on foreign currency credits by 44.5%. The share of past-due debt in the total amount made up 2.5% and 3.7% respectively. Debt overdue by more than 180 days constitutes the greatest share of the total past-due debt.<br />

In 2010 the total volume of refinancing of mortgage loans made up 64.6 billion rbl., corresponding to 17% of the total volume of mortgage loans issued that year, though no further issuance of mortgage securities was envisaged. For comparison, in 2009 the refinancing volume made up 65.4 billion rbl. and was equal to 43% of the total volume.<br />

Liberalizing crediting conditions at a time when real incomes are decreasing reflects the banks' aspiration to increase mortgage lending volumes and represents a dangerous tendency, capable of accelerating past-due debt growth. The government has developed past-due debt restructuring programs, realized by the Mortgage Housing Credits Restructuring Agency (MHCRA), and continues to grant subsidies to some population groups for participation in the SMHC. This suggests that the lessons of the crisis have not been properly absorbed.<br />

2.4 Principles of the SMHC development<br />

To provide steady SMHC functioning it is necessary to work out the principles constituting the basis of the concept of managed, complex, balanced SMHC development. These principles should be adopted by all SMHC participants. The principles of national SMHC development fall into four groups:<br />
1. development management principles;<br />
2. balanced development principles;<br />
3. development dynamics principles;<br />
4. complex development principles.<br />

2.4.1 Development management principles<br />

The development of the mortgage market crisis, and the steps taken to overcome it in the USA, Western European countries and Russia, confirmed the necessity of governmental management of the SMHC. The government should carry out the following functions:<br />
1. ideological function. The government should realize this function by working out the general concept of national SMHC development. In Russia there is a project called «Long-term strategy of mortgage lending development in the Russian Federation». The objectives of this strategy are the following:<br />
• forming a common position of all mortgage market participants on the long-term prospects, benchmarks and principles of developing the mortgage market and other forms of housing financing;<br />
• determining the main long-term objectives and government policy tasks for the mortgage market and other forms of housing financing for the period up to 2030;<br />
• determining the basic measures and actions for accomplishing these tasks, including priority strategic actions, in view of achieving the strategic objectives;<br />
2. legislative function. The creation of the national SMHC became possible in Russia after the adoption of Federal laws № 2972-1 of 5/29/1992 «Concerning collateral» and № 102 of 7/16/1998 «Concerning mortgage (real estate collateral)». Within the legislative function, the government should adopt acts providing a strategic perspective on SMHC development;<br />



3. standard-regulating function. Within this function the government carries out tactical and operational administration of the SMHC and provides a flexible response to circumstances and operating changes generated by external and internal factors;<br />
4. plan-indicative function. This function should be realized by forming a system of indicators reflecting SMHC development, and by working out forecasts and indicative plans;<br />
5. control-analytical function. The government should realize this function to reveal negative tendencies in SMHC development in a timely manner, to analyze the reasons for deviations from indicative plans and to carry out the necessary corrective measures. The Central Bank of the Russian Federation should also provide this function for the banks participating in the SMHC;<br />
6. information-analytical function. Market mechanisms function most effectively under information transparency and publicity. Within the SMHC framework the information-analytical function can be carried out by independent analytical companies and self-regulatory organizations;<br />
7. protection of the rights and legitimate interests of SMHC participants. This function can be realized by working out and adopting legislative and regulatory acts concerning conflicts among SMHC participants, and by creating mechanisms for their enforcement. The SMHC is an open system, as its participants are free to decide whether or not to participate in its functioning. Establishing entrance and exit barriers is one way of maintaining SMHC reliability. Working out the corresponding requirements and controlling their observance can be assigned to self-regulatory organizations within the SMHC (for example, the Association of Russian Banks – ARB, or the National Association of Mortgage Market Participants – NAMMP).<br />

In managing SMHC development, the following principles should be realized:<br />

• the functional importance of the government as the subject of SMHC management;<br />

• objectivity of management;<br />

• combination of strategic, tactical and operational administration;<br />

• scientific validity of administrative decisions at all levels;<br />

• competence of administrative decisions;<br />

• management flexibility and consideration of alternatives;<br />

• combination of centralized management and self-regulation;<br />

• a portfolio management approach.<br />

2.4.2 Balanced development principles<br />

The government should provide balanced social and economic development. The SMHC development objectives should fit the general social and economic development objectives, and the aims of the separate categories of SMHC participants should be coordinated. The priority of social purposes and the corresponding results is characteristic of the government at the federal and regional levels, while commercial objectives are the priority for individual SMHC participants, for example, creditor banks and insurers.<br />

The government should maintain the balance between the SMHC and other national social and economic projects, and between the SMHC and other housing provision systems (Polterovich, 2007). The SMHC should also act as the gear balancing supply and demand in the residential real estate market (Mortgage as a financing source, 2007).<br />

The national SMHC is a developing system whose absolute and structural characteristics change under the impact of external and internal, controlled and uncontrollable factors. It should be flexible and variable enough to respond to new requirements and possibilities, to develop new positive tendencies, and to answer the inquiries of its participants adequately. At the same time, it should be stable enough to exclude failures in its functioning, to provide performance predictability and to retain the trust of its actual and potential participants. The balance between stability and variability of the SMHC structure and parameters, and of its separate subsystems and elements, appreciably depends on the rate and scale of innovations within the SMHC framework.<br />

Balanced development assumes the observance of certain structural proportions between the characteristics of the SMHC and its subsystems. The balanced development principle is diverse in its manifestations and ways of realization and includes the following:<br />

• balance of strategic, tactical and operational regulation of the SMHC;<br />

• balance of differential and integrated management approaches;<br />

• balance between the general social and economic development objectives and the SMHC development objectives;<br />

• balance between the SMHC and its environment;<br />

• balance of social and commercial objectives within the SMHC framework;<br />

• balance between the SMHC levels;<br />

• internal balance between SMHC elements distinguished on the basis of various classification criteria;<br />

• balance of stability and variability;<br />

• balance between competition and cooperation;<br />

• balance of risk and utility (profitability);<br />

• resource balance between the SMHC subsystems and elements;<br />

• balance of publicity and confidentiality;<br />

• balance of personified (individual) and standardized approaches.<br />

2.4.3 Development dynamics principles<br />

The dynamics of SMHC development should obey the following principles:<br />

• dynamic stability, i.e. observance of the balance ratios listed above in dynamics;<br />

• monitoring of non-stationarity factors;<br />

• avoidance and elimination of non-stationarity factors;<br />

• minimization of the negative consequences of non-stationary processes caused by external factors.<br />

2.4.4 Complex development principles<br />

Complex development of the SMHC should be provided with observance of the following principles:<br />

• application of the management, balanced development and development dynamics principles mentioned above to the SMHC as a whole and to the development of all its subsystems;<br />

• plurality of the criteria for determining subsystems;<br />

• application of the management, balanced development and development dynamics principles at all stages of the management cycle, including accounting, analysis, forecasting and planning, plan execution and control;<br />

• openness;<br />

• sharing experience and mutual training.<br />

2.5 SMHC development model<br />

In general, the SMHC development model can be constructed on the basis of the relations implied by the SMHC purpose and the functionality of its elements. The short lifetime of the national SMHC, combined with the non-stationarity of its development during this time, does not allow the application of econometric methods to determine the parameters of the SMHC development model. The model parameters are to be specified by computational experiments.<br />

The increment of the mortgage loans market is defined by its previously reached volume, the accessible financial resources and the housing construction volume. The volume of financial resources in turn depends on population incomes and on the refinancing level provided by mortgage securities issues. We note that foreign loans constitute a significant part of the long-term credit resources of Russian banks.<br />

The increment of the mortgage securities market depends on the previously reached refinancing level and on the demand for mortgage securities, which in turn depends on population incomes. Excessive demand for mortgage securities can stimulate the liberalization of crediting standards and/or a decrease in the mortgage securities interest rate, as a result enlarging the mortgage securities risk.<br />

The housing construction activity of the building branch is financed both by loans raised through the SMHC and by other ways of covering housing acquisition expenses. The alternative ways are acquisition using budget facilities, acquisition entirely from own funds, acquisition using loans not raised through the SMHC, and the amount of initial payments. It is obvious that the capacity of the listed channels is connected with population incomes.<br />



Population income change is, in our opinion, the most significant characteristic of SMHC development. We consider that the feedback chain «crediting volume growth – construction volume growth – allied industries growth – population incomes growth» has a duration comparable to mortgage lending terms. However, it is also necessary to consider incomes formed outside the SMHC.<br />

The dynamic SMHC development model can be presented as a system of linear non-homogeneous differential equations with constant coefficients as follows:<br />

dV/dt = a11·V + a12·I + a13·R + a14·G + Z(t)<br />

dR/dt = a21·V + a22·I + a23·R<br />

dG/dt = a31·V + a32·I<br />

dI/dt = a41·V + a42·I + F(t)<br />

where V – the volume of mortgage housing credits given out,<br />

R – the mortgage-backed securities issue volume,<br />

G – the housing construction volume,<br />

I – the population income level,<br />

the disturbance terms:<br />

Z(t) – the long-term credit resources volume,<br />

F(t) – the component of income growth independent of the SMHC,<br />

a_ij – coefficients reflecting the impact of current SMHC characteristics on its development.<br />

The disturbance terms and coefficients are the controlled parameters, which may be changed by establishing standard restrictions or by indirect governmental impact.<br />

The SMHC development dynamic model presented as a system of linear non-homogeneous differential equations is to be analyzed and tested in the following directions:<br />

1. definition of the system development trajectory passing through the point corresponding to the current SMHC state, at preset coefficient values and disturbance functions, i.e. solving the Cauchy problem. This will allow making a strategic forecast of SMHC development and revealing the necessity of applying operating means;<br />

2. search for the particular solution moving the system from the current (initial) condition to the new (demanded) condition within a limited time, at preset coefficient values and disturbance functions, i.e. solving the boundary problem;<br />

3. definition of the unknown functions and parameters of the system, taking into account additional conditions imposed on its parameters, i.e. solving the eigenvalue problem.<br />
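The Cauchy problem in direction 1 can be sketched numerically. The coefficient values, disturbance functions and initial state below are illustrative placeholders only, not estimated parameters of the Russian SMHC; as the text notes, the real coefficients are to be specified by computational experiments.<br />

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficients a_ij (rows correspond to dV/dt, dR/dt, dG/dt, dI/dt).
a = np.array([
    [0.02, 0.05,  0.03, 0.01],   # a11, a12, a13, a14
    [0.01, 0.04, -0.02, 0.00],   # a21, a22, a23
    [0.03, 0.02,  0.00, 0.00],   # a31, a32
    [0.01, 0.01,  0.00, 0.00],   # a41, a42
])

def Z(t):   # long-term credit resources volume (disturbance term)
    return 1.0

def F(t):   # income growth component independent of the SMHC
    return 0.5

def rhs(t, y):
    V, R, G, I = y
    dV = a[0, 0]*V + a[0, 1]*I + a[0, 2]*R + a[0, 3]*G + Z(t)
    dR = a[1, 0]*V + a[1, 1]*I + a[1, 2]*R
    dG = a[2, 0]*V + a[2, 1]*I
    dI = a[3, 0]*V + a[3, 1]*I + F(t)
    return [dV, dR, dG, dI]

# Cauchy problem: trajectory starting from the current SMHC state y0.
y0 = [1.0, 0.2, 0.5, 1.0]   # V, R, G, I at t = 0 (illustrative)
sol = solve_ivp(rhs, (0.0, 10.0), y0, t_eval=np.linspace(0.0, 10.0, 50))
print(sol.success, sol.y.shape)   # True (4, 50)
```

Solving the boundary problem (direction 2) would instead fix a target state and search over controls, e.g. with `scipy.integrate.solve_bvp`.<br />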

2.6 Uniform SMHC performance indicator system<br />

Practical observance of the principles formulated above and the achievement of the SMHC participants' objectives, in our opinion, characterize the degree of SMHC development and its performance.<br />

The uniform SMHC performance indicator system should cover the SMHC vertically, allowing performance to be estimated at the federal, regional and individual levels, and horizontally, allowing the performance of separate participant categories to be estimated. It should provide spatiotemporal comparability of indicators and their balance at every level (Table 3).<br />


Indicator group | Calculation and analysis level<br />

1. Potential indicators (mortgage lending availability) | Federal, Regional<br />

2. Indicators of macroeconomic and regional economic performance | Federal, Regional<br />

3. Social efficiency indicators | Federal, Regional<br />

4. Mortgage portfolio indicators:<br />

4.1. Mortgage portfolio structural indicators | Federal, Regional, Individual<br />

4.2. Mortgage portfolio profitability indicators | Federal, Regional, Individual<br />

4.3. Refinancing indicators | Federal, Regional, Individual<br />

4.4. Collateral indicators | Federal, Regional, Individual<br />

Table 3: Structure of the SMHC performance uniform indicator system<br />

The first three groups of indicators characterize SMHC performance from the borrower's point of view and display the achievement of government objectives; the fourth group characterizes the achievement of bank objectives, namely the demanded quality of the mortgage portfolio (Kozlovskaya et al., 2008). Mortgage portfolio quality is determined by the combination of profitability and risk, which in turn determines the possibility of refinancing the portfolio (Oseledets, 2006). The risk level concerns not only the probability of borrower solvency loss but also the probability that the collateral is insufficient to repay the loan and interest.<br />

3 Summary<br />

The financial crisis has sharpened the question of the further development of the Russian national SMHC. Eliminating the SMHC problems and disproportions and moving it to a stable development trajectory demand a revision of the view according to which the SMHC is the main instrument for overcoming the large-scale and extremely sharp housing problem.<br />

SMHC performance depends on the adequacy of the social expectations connected with it to its real capacity. In our opinion, only those groups of the population should participate in the SMHC as borrowers whose income level and income stability, in combination with available savings, allow them to make the initial payment for the obtained housing independently, to fulfill credit obligations in due time and in full, and to bear the further burden of maintaining the obtained housing.<br />

The low incomes of a great part of the population restrain the SMHC's quantitative growth, but do not prevent its qualitative improvement on the basis of the suggested concept of controlled, complex, balanced development using the principles formulated in the paper. The government should regulate SMHC functioning by market methods, providing conditions for its proportional and systematic development. The main and indispensable condition is providing population income growth. The dynamic SMHC development model and the uniform SMHC performance indicator system suggested in the paper can be used as planning, forecasting and performance evaluation tools. Thus the probability of a Russian SMHC crisis will be minimized.<br />


4 References<br />

Deliagin, M.G. (2008). Ipotechniy krizis ne rassosiotsia [The mortgage crisis will not resolve itself]. Bankovskoe delo, No. 2, pp. 20-23 (in Russian).<br />

Ipoteka kak istochnik finansirovaniya stroitel'noy deyatel'nosti [Mortgage as a source of financing construction activity]. In: Regional'nye aspekty innovatsionnoy investitsionnoy deyatel'nosti [Regional aspects of innovative investment activity], ed. A.A. Rumyantsev. St. Petersburg: IRE RAN, 2001 (in Russian).<br />

Kozlovskaya, E.A., Guzikova, L.A., Savrukova, E.N. (2008). Povishenie effektivnosti bankovskogo ipotechnogo kreditovaniya v Rossii [Improving the efficiency of bank mortgage lending in Russia]. Nauchno-tekhnicheskie vedomosti SPbGPU, Vol. 1, Ekonomicheskie nauki, No. 1. St. Petersburg: SPbGPU, pp. 175-185 (in Russian).<br />

Mints, V. (2007). O faktorakh dinamiki tsen na zhiluyu nedvizhimost' [On the factors of housing price dynamics]. Voprosy ekonomiki, No. 2, pp. 111-121 (in Russian).<br />

Oseledets, V.M. (2006). Otsenka pokazateley effektivnosti ipotechnogo bankovskogo kreditovaniya [Assessment of bank mortgage lending performance indicators]. Sibirskaya finansovaya shkola, No. 4, pp. 46-49 (in Russian).<br />

Polterovich, V.M., Starkov, O.Y. (2007). Formirovanie ipoteki v dogoniaiuschikh ekonomikakh: problema transplantatsii institutov [Mortgage formation in catching-up economies: the problem of institutional transplantation]. Moscow: Nauka, 196 p. (in Russian).<br />


FOREIGN INVESTORS’ INFLUENCE TOWARDS SMALL STOCK EXCHANGES BOOM AND BUST:<br />

MACEDONIAN STOCK EXCHANGE CASE<br />

Dimche Lazarevski PhD, University American College Skopje, FYRO Macedonia<br />

Email: lazarevski@uacs.edu.mk, www.uacs.edu.mk<br />

Abstract. This paper aims to answer the question of whether and how much foreign investors influence the boom and bust of small stock exchanges. It examines the impact of foreign investors' turnover on small stock exchange turnover, particularly on the Macedonian Stock Exchange. Based on Macedonian Stock Exchange data for the period January 2006 to July 2009, I find strong evidence that for a small and open stock exchange such as the Macedonian Stock Exchange, foreign investors contribute substantially to the Stock Exchange boom and bust.<br />

Keywords: Stock Exchange, Foreign Investors, Turnover, Linear Regression<br />

JEL classification: C35, G01, G12, N24, O16<br />

1 Introduction<br />

The global financial crisis has had a big impact on capital markets, especially stock exchanges. For young and underdeveloped stock exchanges, such as the Macedonian Stock Exchange, which has existed since 1995, the period from the beginning of 2005 until the middle of 2009 was the most remarkable, one that will be remembered for generations. The value of the basic Macedonian Stock Exchange index MBI 10 in January 2005 was 1.000. At its peak, at the end of August 2007, the value rose to 10.057, a 905% growth in only 2 years and 8 months; by April 2011 it had decreased to only 2.500, or 75% less than the highest value. This phenomenon evokes many questions, but maybe the most important one is whether, and to what extent, foreign investors were the reason for the Macedonian Stock Exchange boom and bust.<br />

The impact of foreign investors on big stock exchanges was immense. For example, their withdrawal of US$35bn from the Russian stock market through early September 2007 forced the Russian Central Bank to defend the ruble, sending reserves down 4.2%. Over the following 12 months the Russian stock exchange index lost almost 80% of its value. Foreign investors' withdrawals also accompanied the loss of more than 50% of the Dow Jones Industrial Average's value.<br />

The underdevelopment of the Macedonian Stock Exchange meant that the scientific and professional public was not challenged to analyze the reasons for its fast boom and bust. Therefore, this study addresses these issues by: (1) analyzing the development of the Macedonian Stock Exchange since its beginnings in 1995, in order to determine the level of its development and its exposure to influences from abroad; (2) investigating the Macedonian Stock Exchange index MBI 10 since its introduction at the end of 2004, in order to establish the reasons for the fast growth and reduction; (3) measuring the foreign investors' impact on the Macedonian Stock Exchange using linear regression on the percentage changes in foreign investors' turnover and Macedonian Stock Exchange turnover (Risteski and Tevdovski, 2008; Field, 2005).<br />

2 Data and Model<br />

2.1 Macedonian Stock Exchange development<br />

The story of the Macedonian Stock Exchange began on September 13, 1995, the official date of the establishment of the first organized securities exchange in the history of FYRO Macedonia. Until 2001, transactions were concluded on the "stock exchange floor" in the trading hall through the method of continuous auction and the model of an "order-driven market".<br />

In December 1999 the Securities Law was amended, which set the basis for the dematerialization of securities and introduced centralized record keeping of share ownership in FYRO Macedonia. In 2001, trading on the stock exchange floor stopped and electronic trading of securities started, using the new "Bourse Electronic System of Trading" (BEST).<br />

The first official stock exchange index, the Macedonian Bourse Index (MBI), was promoted. The long-awaited process of complete dematerialization of shares in more than 670 companies in the country and the operation of the Central Securities Depository had begun. In 2002, price limitations of +/-10% of the last official average price were introduced for the securities listed on the Official Market, which contributed to the high volatility of Macedonian Stock Exchange shares in the following years.<br />

In 2004 the listing criteria on the Official Market were raised to a higher level, and the Unofficial Market was split into two parts: a Market for Publicly Held Companies and a Free Market. The criteria for share listing were toughened, and new continuous reporting obligations were introduced (submission of three-month cumulative unaudited income statements, publication of a dividend calendar, and notification of large owners). The publication of a new exchange index (MBI-10) was announced.<br />

A software application called SEI-NET was introduced in 2005 as the official and only way of delivering information from the listed companies to the Exchange. The new Securities Law was passed, bringing further harmonization of securities industry regulation with the EU Directives and the principles of IOSCO (International Organization of Securities Commissions).<br />

The main characteristic of 2006 was the entrance of capital inflows from regional institutional investors into the FYRO Macedonian capital market. Two private pension funds appeared on the domestic market as new institutional investors. In 2006, the state-owned company for electricity supply and distribution, "ESM", was privatized by selling a certain percentage of its capital to a foreign investor. During the year, the state's presence on the securities market was considerable, due to the selling of state-owned capital in many joint stock companies, among which the most significant was Makedonski Telekom AD Skopje, along with a few more listed companies.<br />

The year 2007 was the most successful since the foundation of the MSE back in 1995. The MSE realized a record turnover of 41,7 billion MKD and the MBI 10 index achieved its biggest value, breaking the barrier of 10.000 index points, or 905% growth in 2 years and 8 months. As a comparison, over the same period (January 2005 – August 2007) the Dow Jones Industrial Average rose only 23,87% (from 10.783,01 to 13.357,74), and the Standard & Poor's 500 rose only 21,62% (from 1.211,92 to 1.473,99).<br />

The global financial crisis that began in the middle of 2007 had its impact on the Macedonian Stock Exchange as well, bearing in mind its openness and immense exposure to foreign investors. Foreign institutional investors, in order to cover their liquidity needs arising from investors' requests to step out of their investment funds, were forced to sell their stock positions, especially those in FYRO Macedonia, where the highest returns had been made.<br />

2.2 MBI 10 Index and Foreign Investors’ turnover<br />

The initial value of the MBI 10 index was established at 1.000 on December 30, 2004. In only one year (2005), it grew to 2.292,04, or 129,2% (Figure 1, peak 1). In 2006 the MBI 10 index grew an additional 61,54%, to a value of 3.702,54 (peak 2). In the first quarter of 2007 alone, the index grew an additional 48,06%, to 5.481,92 (peak 3). At the end of June 2007, the index reached 6.917,51, continuing the growth with an additional 26,29% just in the second quarter (peak 4). The Macedonian Stock Exchange and its MBI 10 index reached their peak on August 31, 2007, when the value of the index stopped at 10.057,77 (peak 5), which was 45,4% growth in only 2 months, 171,65% growth over those eight months of 2007, and 905,78% growth in only the 2 years and 8 months since the index was first introduced and measured.<br />
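The growth percentages quoted above follow directly from the index values; a quick sketch (values taken from the text, with decimal commas written as dots) confirms them:<br />

```python
# MBI 10 index values quoted in the text.
mbi_base = 1000.00       # initial value, December 30, 2004
mbi_jun_2007 = 6917.51   # end of June 2007 (peak 4)
mbi_peak = 10057.77      # all-time peak, August 31, 2007 (peak 5)

def pct_growth(old, new):
    """Percentage growth from old to new."""
    return (new / old - 1) * 100

print(round(pct_growth(mbi_jun_2007, mbi_peak), 1))  # 45.4  (two months to the peak)
print(round(pct_growth(mbi_base, mbi_peak), 2))      # 905.78 (since introduction)
```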

At the end of 2007, the MBI 10 started declining for the first time: by 16,61% for the fourth quarter, and by 23,04% from its peak at the end of August (peak 6). A decline of 13,35% was recorded in the first quarter of 2008 and an additional 27,16% in the second quarter of 2008, i.e. 51,42% in only 10 months from the peak (peaks 7 and 8). The third quarter of 2008 finished with a decline of 8,95% and the fourth quarter with an additional 52,88%, or a 79,16% decline from the peak 16 months earlier. The MBI 10 index reached its bottom on March 10, 2009, when its value was only 1.598,5, an 84,11% decline from the highest peak in August 2007 (peak 9). The second quarter of 2009 finished with growth of 32,89% compared to the first-quarter value, reaching 2.532,43 (peak 10). This value has not changed until today (April 29, 2011), with only small fluctuations.<br />

Source: Macedonian Stock Exchange data, Authors' own creation<br />

Figure 1. Macedonian Stock Exchange Index MBI 10<br />

Foreign investors appeared on the Macedonian Stock Exchange at the beginning of 2005. From Figure 2, we can see that the first more significant turnover they realized was at the beginning of 2006, when they had 4,8 billion MKD turnover 1 – the period when the first more significant growth in the MBI 10 can be noticed in Figure 1. The year 2006 finished with 11 billion MKD turnover, or 282% growth over the previous year, which resulted in the additional annual growth of the MBI 10 index of 61,54%. In 2007 we find the most significant activity of foreign investors, since the total turnover in that year was 26,2 billion MKD, an increase of 137,34% over the previous year (from peak 2 to peak 6 in Figure 1). In that year the MBI 10 reached its peak on August 31, with growth of 171,65% in only the first 8 months of the year.<br />

Source: Macedonian Stock Exchange Data, Authors’ own creation<br />

1 Foreign exchange rate for 1 € = 61,3 MKD<br />

Figure 2. Foreign Investors' turnover<br />


After the first decline in the fourth quarter of 2007, foreign investors' turnover started declining significantly, thus sending stocks into a tailspin. This can be explained partially by the decline in share prices, but mainly by worries over a possible recession in the US. Since then, foreign investors have decreased their turnover to historically minimal levels, investing their money in more developed stock exchanges, where the first signs that the global financial crisis had finished were noticed through the positive rise in the value of their indexes. For example, from its bottom of 6.626,94 on March 6, 2009, the Dow Jones Industrial Average rose to 12.810,54 on April 30, 2011, or 93,31% 2 ; the S&P 500 index rose from its minimum of 683,38 on March 6, 2009 to 1.363,61 on April 30, 2011, or 99,54%.<br />

According to Sumanjeet and Paliwal (2010), the increase in the volume of foreign institutional investment inflows in recent times has led to concerns regarding the volatility of these flows, the threat of capital flight, their impact on stock markets, and the influence of changes in regulatory regimes. They revealed that any problem related to foreign institutional investors is a problem of management, and that India should develop new tools to manage foreign institutional investors effectively and efficiently. This can be a good direction in which the Macedonian Stock Exchange and the securities market regulators should act in the next period if the immense volatility is to be eliminated and a more stable securities market established. This can be crucial for the future preservation of domestic investors, their investments and their participation in the securities market.<br />

2.3 Linear regression<br />

In determining the influence of foreign investors on the Macedonian Stock Exchange, quantitative analysis has been used. A simple linear regression is applied to assess the impact of the percentage change in foreign investors' turnover (independent variable) on the percentage change in Macedonian Stock Exchange turnover (dependent variable). The elaborated period includes monthly data from January 2006 to July 2009 and covers 42 observations. Data from the Macedonian Stock Exchange have been used for the analysis (Table 2).<br />

The theoretically substantiated result, for a small and open economy such as FYRO Macedonia, is that higher foreign investors' turnover substantially contributes to higher turnover on the Macedonian Stock Exchange. The obtained results indicate a high positive linear correlation of 95,9% between the observed variables.<br />

Multiple R 0,959<br />

R Square 0,919<br />

Adjusted R Square 0,917<br />

Standard Error 0,893<br />

Observations 42<br />

Source: Authors’ own calculations<br />

Table 1: Regression Statistics<br />

The adjusted coefficient of determination, which is also high, shows that the share of explained variability in the total variability was 91,7%; that is, the variations that cause the change in Macedonian Stock Exchange turnover are 91,7% explained by the changes in foreign investors' turnover, while the remaining 8,3% is caused by variations in domestic investors' turnover.<br />

2 http://www.google.com/finance?client=ob&q=INDEXDJX:DJI<br />
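The regression summarized in Table 1 can be reproduced in a few lines. The sketch below applies ordinary least squares to only the first six monthly percent-change pairs from Table 2 (February–July 2006), so its coefficients differ from the full-sample estimates y' = 0,105 + 0,858x and r = 0,959 reported in the paper.<br />

```python
import numpy as np

# Percent changes taken from Table 2, Feb-Jul 2006 (a subsample for illustration).
x = np.array([-44.56, 2197.22, -91.53, 70.43, -14.78, -36.78])  # foreign investors' turnover, %
y = np.array([-20.86, 1923.52, -94.22, 113.80, 382.67, -88.45]) # MSE total turnover, %

b1, b0 = np.polyfit(x, y, 1)     # OLS slope and intercept
y_hat = b0 + b1 * x              # fitted values
r = np.corrcoef(x, y)[0, 1]      # linear correlation coefficient

print(round(b1, 3), round(r, 3)) # slope is positive, correlation is high
```

On the full 42-observation sample the same computation yields the reported coefficients; a library such as statsmodels would additionally provide the standard error, confidence interval and p-value discussed below.<br />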



Month | Macedonian Stock Exchange Turnover 3 | % change of Macedonian Stock Exchange Turnover | Foreign investors' turnover 4 | % change of Foreign investors' turnover<br />

January 2006 729.821.735 n/a 378.193.623 n/a<br />

February 2006 577.553.739 -20,86% 209.652.007 -44,56%<br />

March 2006 11.686.914.615 1923,52% 4.816.177.513 2197,22%<br />

April 2006 675.779.595 -94,22% 408.103.297 -91,53%<br />

May 2006 1.444.799.993 113,80% 695.526.717 70,43%<br />

June 2006 6.973.598.598 382,67% 592.755.881 -14,78%<br />

July 2006 805.369.545 -88,45% 374.738.449 -36,78%<br />

August 2006 1.633.634.810 102,84% 563.604.009 50,40%<br />

September 2006 1.080.810.050 -33,84% 441.510.905 -21,66%<br />

October 2006 2.247.645.453 107,96% 415.140.115 -5,97%<br />

November 2006 1.191.008.537 -47,01% 548.340.330 32,09%<br />

December 2006 1.970.997.263 65,49% 1.620.751.049 195,57%<br />

January 2007 4.223.183.976 114,27% 3.074.477.935 89,69%<br />

February 2007 1.939.990.193 -54,06% 1.079.798.541 -64,88%<br />

March 2007 3.113.169.854 60,47% 1.484.982.020 37,52%<br />

April 2007 4.726.054.924 51,81% 2.268.978.969 52,80%<br />

May 2007 4.288.787.998 -9,25% 2.289.783.912 0,92%<br />

June 2007 1.682.118.805 -60,78% 982.862.018 -57,08%<br />

July 2007 2.862.587.167 70,18% 3.008.865.371 206,13%<br />

August 2007 3.484.656.539 21,73% 1.406.755.845 -53,25%<br />

September 2007 3.633.652.176 4,28% 1.803.018.210 28,17%<br />

October 2007 6.660.933.629 83,31% 5.188.867.297 187,79%<br />

November 2007 3.690.422.068 -44,60% 3.071.538.287 -40,81%<br />

December 2007 1.396.763.119 -62,15% 600.049.436 -80,46%<br />

January 2008 799.744.209 -42,74% 410.988.549 -31,51%<br />

February 2008 1.343.493.855 67,99% 901.081.329 119,25%<br />

March 2008 1.039.593.255 -22,62% 617.414.434 -31,48%<br />

April 2008 1.122.998.859 8,02% 896.826.889 45,26%<br />

May 2008 550.902.297 -50,94% 308.009.474 -65,66%<br />

June 2008 817.860.186 48,46% 536.025.566 74,03%<br />

July 2008 833.487.050 1,91% 697.878.707 30,20%<br />

August 2008 1.738.052.930 108,53% 1.800.101.420 157,94%<br />

September 2008 732.285.137 -57,87% 363.799.256 -79,79%<br />

October 2008 1.248.960.065 70,56% 778.351.913 113,95%<br />

November 2008 482.533.487 -61,37% 165.605.493 -78,72%<br />

December 2008 1.664.049.515 244,86% 475.252.541 186,98%<br />

January 2009 350.616.520 -78,93% 384.731.507 -19,05%<br />

February 2009 354.127.668 1,00% 284.187.454 -26,13%<br />

March 2009 367.193.897 3,69% 156.461.320 -44,94%<br />

April 2009 1.519.038.214 313,69% 233.628.077 49,32%<br />

May 2009 856.912.383 -43,59% 354.504.653 51,74%<br />

June 2009 570.652.166 -33,41% 103.002.716 -70,94%<br />

July 2009 658.728.115 15,43% 163.232.827 58,47%<br />

Source: Macedonian Stock Exchange Data, Authors’ own creation<br />

Table 2: Macedonian Stock Exchange Data<br />

3 Macedonian Stock Exchange Turnover represents the total value of bought or sold securities for that month<br />

4 Foreign investors' turnover represents the total value of both bought and sold securities for that month by foreign investors<br />


Source: Authors’ own calculations<br />

Figure 3. Line Fit Plot<br />

The standard error of the regression is small, amounting to 0,893, and shows the absolute deviation of the empirical data from the regression line of the sample. The regression equation of the sample is as follows:<br />

y'i = b0 + b1·xi = 0,105 + 0,858·xi<br />

The intercept on the Y-axis, i.e. the value of the dependent variable when the independent variable X = 0, is the regression parameter b0 = 0,105. The regression parameter b1 = 0,858 indicates the change in the dependent variable Y when the independent variable X increases by one unit. In this case, it suggests that if the foreign investors' turnover increased by 1%, then the Stock Exchange turnover would on average increase by 0,858%. The estimated value of this coefficient, with a 5% risk of error, ranges from 0,777 to 0,940. Testing the regression coefficient b1 = 0,858 gives a p-value of 0,000. This value indicates the smallest significance level at which the null hypothesis H0: β1 = 0 can be rejected and the alternative hypothesis H1: β1 ≠ 0 accepted. In our case, the alternative hypothesis is accepted; i.e. the regression coefficient b1 = 0,858 is also statistically confirmed to be different from 0, meaning the independent variable truly affects the dependent variable. The Residual Plot figure shows the standardized residuals of the regression on the Y-axis and the predicted standardized values of the regression on the X-axis.<br />

Source: Authors’ own calculations<br />

Figure 4. Residual Plot<br />
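The fitted line can be reproduced with a short ordinary-least-squares sketch. The monthly observations below are illustrative placeholders, not the authors’ sample; they are constructed so that the fit matches the reported equation ŷ = 0.105 + 0.858x:

```python
import numpy as np

def ols_fit(x, y):
    """Least-squares intercept b0 and slope b1 for y = b0 + b1*x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

# Illustrative monthly growth rates (percent); the real inputs would be
# the Foreign Investors' turnover (x) and Stock Exchange turnover (y) series.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 0.105 + 0.858 * x

b0, b1 = ols_fit(x, y)
print(round(b0, 3), round(b1, 3))  # 0.105 0.858
```

With the authors’ actual data, the same slope estimate would come with a standard error, from which the reported 95% interval (0.777 to 0.940) and the p-value for H_0: β_1 = 0 follow.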



Source: Authors’ own calculations<br />

Figure 5. Normal probability plot<br />

From the foregoing it can be concluded that the assumption of linearity is fulfilled, but the same cannot be fully asserted for the assumption of homoscedasticity, i.e. equal variance of the residuals. All of this statistically confirms the theoretical expectation that Macedonian Stock Exchange turnover depends positively on Foreign Investors’ turnover.<br />

3 Summary<br />

The analysis of the Macedonian Stock Exchange’s development since its beginnings in 1995, bearing in mind its short history, leads to the conclusion that the Stock Exchange is still underdeveloped and excessively open and exposed to influences from abroad. In only 2 years and 8 months the Macedonian Stock Exchange Index MBI 10 grew by 905%, and over the following 18 months it fell by 84%, which is evidence of very high volatility. Investigating the reasons, we conclude that, beyond the global financial crisis, the direct driver of the rapid growth and decline of the MBI 10 Index was the foreign investors’ turnover.<br />

The theoretically substantiated result, for a small and open economy such as FYRO Macedonia, is that higher Foreign Investors’ turnover contributes substantially to higher turnover on the Macedonian Stock Exchange. The results obtained from the regression indicate a high positive linear correlation of 95.9% between the observed variables.<br />

Both theoretically and empirically, this study shows that foreign investors contribute substantially to the Macedonian Stock Exchange’s boom and bust. To guard against immense volatility in the future, FYRO Macedonian securities market regulators need to establish sounder rules that will enable effective and efficient management of the stock exchange. This should provide stability and eliminate the negative influence stemming from the openness and exposure of the Macedonian Stock Exchange to foreign investors.<br />

4 Acknowledgements<br />

University American College Skopje supported this research paper. I appreciate the efforts of Marjan Petreski for his critical review and of Aleksandar Ajevski for his assistance with data collection.<br />



5 References<br />

Field, A. (2005). Discovering Statistics Using SPSS. Second edition, Sage publications<br />

http://www.mse.org.mk/ReportDetail.aspx<br />

http://www.google.com/finance<br />

Risteski, S., Tevdovski, D. (2008). Statistics for Business and Economics, Economic Faculty - Skopje<br />

Sumanjeet, S., Paliwal, M. (2010). Liberalization of foreign institutional investments (FIIS) in India: Magnitude,<br />

impact assessment, policy initiatives and issues. Global Journal of International Business Research, 2010, Vol.<br />

3. No. 3, pp 22-41<br />



GENDER BIAS IN HIRING, ACCESS TO FINANCE AND FIRM PERFORMANCE: EVIDENCE FROM<br />

INTERNATIONAL DATA *<br />

Nigar Hashimzade<br />

School of Economics, University of Reading, United Kingdom<br />

Yulia Rodionova †<br />

Leicester Business School, De Montfort University, United Kingdom<br />

October 18, 2010<br />

Abstract. We analyze the effect upon the firm's performance of the interaction between an employer and<br />

an employee, with the possibility of gender-based bias in hiring. In the presence of bias the characteristics<br />

of a successful employer-employee match may be independent of the employee's productivity. In a model<br />

with imperfect information and signalling we make some predictions about the impact of the gender of the<br />

owner and the top manager on the firm's performance and on the degree of discrimination in access to<br />

finance. Further, using firm-level data from twenty-six countries of the former Soviet bloc from the<br />

Business Environment and Enterprise Performance Survey in 2009, we explore the effect of the gender of<br />

the owner and the manager on a firm's performance, both directly and through access to finance, to test the<br />

model's hypotheses. We find that a hiring policy based on individual characteristics other than productivity<br />

(after controlling for various factors such as industry, country and possible constraints on access to external<br />

finance) has a negative effect on firm's performance. Our empirical findings are consistent with the<br />

predictions of the model. In particular, we find that firms with female top managers perform better than the<br />

firms with male top managers; this effect is mitigated if the firm is owned by a female, or if it operates in<br />

the industry where female-style leadership is valued. The effect is also less pronounced for those female top<br />

managers who started working in the same industry under the Soviet system - before 1992. If the<br />

unobserved heterogeneity in the firms' performance is caused by the difference in the entrepreneurial skills<br />

of both the owner and the top manager, our findings suggest, in line with the model predictions, that the<br />

average skills of female owners (managers) are lower (higher) than those of their male counterparts. We<br />

also find evidence of spillovers from discrimination in the labour market to statistical discrimination in access to finance. Our results suggest that gender bias in hiring may lead to an inefficient allocation of physical and human resources.<br />

Keywords: discrimination, entrepreneurship, finance, gender, job sorting, signalling, small and medium<br />

enterprises<br />

* This version: October 15, 2010. Based on a previous version by Hashimzade et al. (2010).<br />

† Corresponding author. Email: yrodionova@dmu.ac.uk . The authors are very grateful to Mark Casson, Susan Marlow and<br />

Natalia Vershinina as well as participants of the EMFT conference in Milas, Turkey, June 2010 and WIEM2010 conference for<br />

helpful comments and suggestions.<br />



INTELLECTUAL CAPITAL DIMENSIONS AND QUOTED COMPANIES IN TEHRAN EXCHANGE,<br />

BASED ON BOZBURA MODEL<br />

Rasoul Abdi<br />

Department of Accounting, Islamic Azad University of Bonab, Iran<br />

Abdi_rasool@yahoo.com<br />

Nader Rezaei<br />

Department of Accounting, Islamic Azad University of Bonab, Iran<br />

naderrezaeimiyandoab@gmail.com<br />

Yagoub Amirdalire Bonab<br />

Department of Accounting, Islamic Azad University of Bonab, Iran<br />

amirdalir@yahoo.com<br />

Abstract: The purpose of the present study is to provide a proper definition of intellectual capital and its components, and to examine the relationship between intellectual capital and the market value of quoted companies on the Tehran exchange, based on the Bozbura model. The components of intellectual capital are most often defined along three dimensions: human capital, relation capital and structural capital. The researchers sought to capture the effects of intellectual capital on the market value of quoted companies on the Tehran exchange. The study addressed four hypotheses based on the nature and significance of the study. Based on the results of the data analysis, there was a significant relationship between human capital and the market value of quoted companies on the Tehran exchange. Structural capital was also significantly related to human capital and relation capital.<br />

Keywords: Market value, intellectual capital, human capital, structural capital, relation capital, and Tehran exchange.<br />

Introduction<br />

Intellectual capital (IC) information gained importance because it is seen as an integral part of firms’ value-creating<br />

processes (Bukh, 2003). Nowadays, intangible assets such as staff skills, strategy and process quality, software, patents, brands, and supplier and customer relationships make a great contribution to success in many corporations, and so are considered valuable assets. These assets deliver a fast-growing contribution to corporate competitiveness and are usually classified as intellectual capital (IC) (Hofman, 2005). IC is defined as intellectual resources that have been “formalized, captured, and leveraged” to produce higher-value assets (Prusak, 1998). There are many models and classifications of intellectual resources in the literature. Most of them can be grouped under the Sveiby-Stewart-Edvinsson model (Bukh et al., 2001). The model consists of human capital (HC), structural capital (SC), and relationship capital (RC). According to Sveiby (1997), HC involves the capacity to act in a wide variety of situations to create both tangible and intangible assets. Stewart (1997) emphasized that the primary purpose of HC is the innovation of new products and services, or the improvement of business processes. Although the<br />

importance of intellectual capital (IC) has increased greatly in the last two decades (Serenko and Bontis, 2004),<br />

many organizations are still struggling with better management of IC due to measurement difficulties (Dzinkowski,<br />

2000). Many authors have argued that IC, which represents the stock of assets generally not recorded on the balance<br />

sheet, has become one of the primary sources of competitive advantage of a firm (Bontis, 1996; 1998, 2001;<br />

Edvinsson and Malone, 1997; Roos et al., 1998; Stewart, 1997; Sveiby, 1997). Given the remarkable shift in the<br />

underlying production factors of a business within the new knowledge economy (Drucker, 1993), it is important for<br />

firms to be aware of the elements of IC that could lead to value creation. Knowledge management and related fields emphasize that, in the modern global economy, achieving a lasting competitive advantage depends on an organization’s capacities and abilities and on the proper use of its knowledge-based resources. It must be noted that not all of an organization’s resources are equally important. The structure of organizational assets is changing: in the past, the tangible properties of organizations were more significant than the intangible ones, but nowadays the intangible properties matter more. The modern economy, built on knowledge and information, gives intellectual capital an influential role in both research and business. The main purpose of the present study is to provide a proper definition of intellectual capital and its components, and to examine the relationship between intellectual capital and the market value of quoted companies on the Tehran exchange, based on the Bozbura model. Much research has already been done in this area, e.g. (Anvari Rostami & Rostami, 2003), (Bontis, 2002), (Kujansivu & Lonnqvist, 2008), (Mouritsen, 2001),<br />



(Lynn, 2000), and some articles presented in Copenhagen, Denmark, by the European Accounting Association (EAA).<br />

Accounting for intellectual capital<br />

Although writers including Reich (1991) and Stewart (1991) had previously identified the growing importance of intellectual capital as a source of long-term value creation for organizations, it was in the mid-1990s that interest<br />

in it began to escalate. At that time a number of popular texts on the subject were published, including Brooking<br />

(1996), Edvinsson and Malone (1997), Roos et al. (1997), Stewart (1997) and Sveiby (1997), together with<br />

Edvinsson’s seminal 1997 paper on Skandia AFS’s pioneering work in intellectual capital management. From a<br />

specifically accounting perspective, the emergence of intellectual capital as a key organizational resource raised the<br />

question of how to report it alongside other assets in financial statements. This was not a new problem, however, as<br />

there have long been key organizational assets for which it has not been possible to report values. The goodwill built<br />

up by a business could only be included within the stock of intangible assets when it was acquired by another business, subsequently to be amortised or, more commonly, written off at the time of acquisition. Specific intangible assets such as brands, themselves later to be included within the designation of intellectual capital, were subject to the same provisions. Attempts to account for “people”, or more specifically their experience, expertise, creativity, organizational commitment, etc., currently designated as the human capital component of intellectual capital, had all but been abandoned after first human asset accounting, then human resource accounting, slipped down the research agenda in the late 1970s.<br />

The problem of accounting for intellectual capital was clearly one that could not be ignored. Intellectual<br />

capital’s growing importance was underlined by the growing disparity between the book values of many businesses,<br />

determined in accordance with prevailing financial accounting and reporting provisions, and the market values of<br />

these same entities. The extent of what Edvinsson (1997) termed the “hidden value” of organizations was frequently greatest in the knowledge-based industries, precisely the places where intellectual capital is most critical to long-term business performance. This situation quickly became more apparent with the rise of the dot.com companies in<br />

the late 1990s, although post-Enron concerns about the veracity of financial statements and the general downturn in<br />

the global economy following the events of September 2001 have seen a reversal in this upward trajectory. The<br />

worry was, and remains, that disparities of this sort have the capacity to disrupt the workings of the capital market. In<br />

order to ensure that this did not occur, some means had to be identified for reporting intellectual capital to the<br />

market. Coupled with this, the growing tendency to link executive remuneration to share price meant that the<br />

accountancy profession was under great pressure to demonstrate the true value of the business in its financial statements. If it had been possible to identify some simple means of extending the established accounting calculus to<br />

incorporate intellectual capital, the (on-going) debate about accounting for intangible assets would have already<br />

provided clear indications on how to proceed. It had not, which meant that the accountancy profession was not well<br />

placed to deliver reliable information of the sort many stakeholders might, not unreasonably, expect of it. In<br />

Edvinsson (1997) the dilemma facing the accountancy profession is clearly visible.<br />

Edvinsson argues that a preferable approach is one that provides information on the success with which an<br />

organization’s management has grown the stock of intellectual capital. Such information would, of necessity, be of a<br />

more prospective nature, and thereby of greater relevance to stakeholders interested in the sustained value creation<br />

capacity of the business. The Navigator was designed to provide information on the human, customer, process and<br />

renewal and development foci of the organization, in addition to financial information, and to do so using a set of<br />

indicators that would also convey how the organization viewed its own value creation strategy. Initially, Skandia<br />

reported such information in the form of brief supplements to its financial statements, quickly developing a more<br />

comprehensive approach in the late 1990s (Mouritsen et al., 2001a). In retrospect, it is possible to view the Balanced<br />

Scorecard as providing an alternative approach to the Navigator (Kaplan and Norton, 1992, 1993, 1996), while<br />

Sveiby’s Intangible Assets Monitor offers a third approach (Sveiby, 1997). Lev (2001) has subsequently developed a<br />

Value Chain Scoreboard that makes use of a possible nine different information foci.<br />

As if to emphasise the difference between reporting the success (or otherwise) of intellectual capital<br />

management from traditional financial reporting, the various scorecard approaches normally commended the<br />

extensive use of non-financial indicators. These are selected for their relevance to the task in hand, an idea more commonly associated with managerial accounting than financial accounting. In the last analysis, however, scorecards<br />



affirmed the value of employing a quantitative approach to reporting, something that was soon to be challenged as<br />

researchers sought to develop a second wave of intellectual capital reporting frameworks.<br />

In the vanguard of this development were researchers associated with the Danish Agency for Trade and<br />

Industry, which funded a programme beginning in 1998 and continuing to date. What these researchers commend is<br />

a narrative approach to intellectual capital accounting and reporting, using what are termed Intellectual Capital<br />

Statements. The basis for such statements is the organization’s knowledge narrative (subsequently, termed the<br />

management narrative), from which is adduced a number of key management challenges. These in turn inform a set<br />

of management challenges for which relevant indicators are identified. The whole information set (including<br />

financial statements) is then reported to stakeholders on a regular basis, using a range of representational forms inter<br />

alia extensive narratives. A similar approach was also commended in the final report of the Meritum research group,<br />

the Intellectual Capital Report combining three elements: the vision of the firm; the summary of intangible resources<br />

and activities; and the system of indicators (Meritum, 2002). When discussing the indicators to be used in the Value<br />

Chain Scoreboard, Lev (2001, p. 115) comments that they “should satisfy three criteria to ensure maximal usefulness”. The three criteria are: quantitative in nature; standardized (or easily standardized) to permit inter-firm<br />

comparison; and empirically linked to corporate value. This observation is at odds with the trajectory implicit in the<br />

move to narratives. In the case of narratives, the objective would seem to be that of developing an intellectual capital<br />

account that adds value by virtue of the richness of its information content and reflexive nature. A strong divergence<br />

of approach seems to be occurring. Whilst the European model is moving in a strongly qualitative direction, the North American model is seeking to retain as much of a “hard number” emphasis as possible. This is evident in the<br />

case of the Value Creation Index (Cap Gemini, 2000; Low, 2000) in which an organisation’s scores in respect of<br />

nine intangible value drivers are weighted in an attempt to determine an index that might be compared with like<br />

organisations. Lev himself has also been active in developing a Knowledge Capital Earnings methodology (Gu and<br />

Lev, 2001; Stewart, 2001; Tayles et al., 2002). In common with the earlier Calculated Intangibles Value metric<br />

(Dzinkowski, 1999), this approach is designed to generate market-friendly intangibles valuations incorporating<br />

reliable and robust financial information.<br />

Origins of the Study<br />

This part of the study is about the concept and components of intellectual capital, based on the presented model. Until now,<br />

the definitions of IC have been discordant. In recent years, driven by necessity, many individuals and groups from<br />

different disciplines have tried to agreeon a standard definition for IC (Edvinsson and Malone, 1997). Initially,<br />

Edvinsson and Malone’s (1997) and Stewart’s (1997) research helped to bring the term to the forefront. Edvinsson<br />

and Malone (1997, p. 358) defined IC simply as “knowledge that can be converted into value.” Stewart (1997, p. x)<br />

broadened the definition to IC as “intellectual material – knowledge, information, intellectual property, experience –<br />

that can be put to use to create wealth” by developing competitive advantage in an organization. When intellectual<br />

material is formalized and utilized effectively, it can create wealth by producing a higher value asset, called IC. The<br />

intellectual capital is the knowledge-based part of a company’s total capital. Based on this definition, intellectual capital includes the knowledge converted into intellectual property, the company’s intellectual assets, and the final results of the conversion process. The standard definition of intellectual property includes ownership rights such as patents, trademarks and copyrights. These assets are the only form of intellectual capital suitable for accounting purposes (Anvari Rostami & Rostami, 2003). Intellectual capital consists of knowledge, information, intellectual assets and experience used to create wealth. It includes collective intangible abilities, or key knowledge held as a group (Bontis, 2000).<br />

Intellectual capital includes various intangible resources which are valuable to an organization (Kujansivu &<br />

Lonnqvist, 2008).<br />

In most models of intellectual capital assessment that have been designed, its components are defined along the three dimensions of human capital, relation capital and structural capital.<br />



Human Capital<br />

Human capital, which can be defined as the first dimension of intellectual capital, comprises the abilities, skills, and proficiency of an organization’s personnel. Human capital embodies thought and involves all the abilities and skills of the organization’s personnel (Lynn, 2000). A combination of Sveiby’s and Stewart’s definitions of HC is given by Edvinsson and Malone (1997), who defined it as the combination of the knowledge, skill, innovativeness, and ability of the company’s individual employees. In other words, human capital consists of the general and professional knowledge of personnel, leadership abilities, risk taking and problem solving. The main purpose of human capital is the innovation of goods and services and the improvement of business processes (Mouritsen, 2001). The most important indexes of human capital are the professional competences of personnel, their experience and knowledge, the number of company personnel with prior related knowledge, and the accurate distribution of responsibilities; this is named individual capability in the “Intangible Asset Monitor” model (Sveiby, 1997), while in the “Balanced Scorecard” model it is named the learning and development dimension (Kaplan and Norton, 1999). Therefore, when forming our model, one dimension should be reserved for human capital.<br />

Relation Capital<br />

Relation capital, as the second dimension of intellectual capital, represents the company’s relations with the outside world. It includes relationships outside the organization, such as customer loyalty, the company’s reputation, and the company’s relations with its suppliers. In considering the relationship between the company and the outside world, it can be observed that there are factors other than customers that are also influential, so relations with producers and with society must also be defined (Bozbura, 2004). Relational capital comprises the knowledge embedded in all the relationships an organization develops, whether with customers, competitors, suppliers, trade associations or government bodies (Bontis, 1999). One of the main categories of relational capital is usually referred to as customer capital and denotes the “market orientation” of the organization. There is no consensus on a definition of “market orientation” (Bontis et al., 2001), but Kohli and Jaworski (1990) defined it as the organization-wide degree of market intelligence generation, dissemination, and action based on the current and future needs of customers. Relation capital includes rights granted to the company, the company’s relations with people and organizations related to its customers, customer retention or loss rates, market share, and also the net profitability per customer (Anvari Rostami & Rostami, 2003).<br />

Structural Capital<br />

Structural capital is the third dimension of intellectual capital. It includes the capacity to perceive market needs, and comprises items such as patents, knowledge-based structures, and organizational processes and cultures. This dimension of organizational assets is sometimes called intellectual assets, infrastructure assets, innovation capital or process capital (Bozbura, 2004). The structural dimension of intellectual capital is defined as Structural capital: the sum of all assets that make the creative ability of the organization possible. The mission of the firm, its vision, its basic values, strategies, working systems, and in-firm processes can be counted among these assets. Structural capital is one of the foundation stones of creating learning organizations. Even if employees possess adequate or high capabilities, an organizational structure made up of weak rules and systems, which cannot turn these capabilities into value, prevents the firm from achieving high performance. In contrast, a strong Structural capital structure creates a supportive environment for workers and thus encourages them to take risks again after failures. Besides, it leads to a decrease in total costs and to an increase in the firm’s profit and productivity. Therefore, Structural capital is a vital structure for organizations and, at the organizational level, it is of critical importance for measuring intellectual capital (Bontis, 1998, 1999, 2001, 2003). Many factors are defined in the models built to measure Structural capital. Visible assets such as the firm’s patents, copyrights, databases and computer programs, and intangible assets such as business management methods, company strategies, and the company culture, are among these factors. High investment in technology or a high number of computers and programs in a firm is not in itself a feature that adds value to the firm. For these to contribute to the company, the firm’s workers should have the abilities to use these systems, to interpret the results, to turn them into knowledge and to use them in their relations (Fitz-enz, 2001). As long as they are not put to use, the existence of systems that possess and transmit knowledge, the foundation stone of Structural capital, is no means of adding value. Therefore, it would be wrong to claim that Structural capital has a direct, linear relationship with the performance of the company.<br />

Development of research hypotheses<br />

The set of hypotheses in this study explores the relationships between human, structural and relational capital and the market value of quoted companies on the Tehran exchange, based on the Bozbura model. Using a questionnaire for data collection, Bontis (1998) found a significant relationship between each element of IC (human, organizational, and relational) and firm market value. Several financial measures such as profit, profit growth, sales growth and ROI were used as indicators of firm performance. He also found significant associations between the IC elements. In another study that used the same questionnaire, Bontis et al. (2000) studied the interrelationships of the components of IC and the relationship between structural capital and firm performance.<br />

Therefore, we can state our hypotheses on human, structural and relational capital as follows:<br />

H1. There is a significant relationship between human capital and the market value of quoted companies on the Tehran exchange.<br />

H2. There is a significant relationship between relation capital and the market value of quoted companies on the Tehran exchange.<br />

H3. There is a significant relationship between structural capital and human capital.<br />

H4. There is a significant relationship between structural capital and relation capital.<br />

According to the above discussion, for these hypotheses we can define our research model as shown in Figure 1.<br />

Figure 1. Theoretical framework of the research hypotheses: human capital (H1) and relation capital (H2) are positively related to the company’s market value, while structural capital is related to human capital (H3) and relation capital (H4).<br />

Methodology<br />

The analysis of the data was based on descriptive statistics, which allow the researchers to describe the sample data. The qualitative research model precedes the structural theory for the quantitative analysis (Sieber, 1973).<br />



In terms of time frame, the present study is cross-sectional, drawing on the views of managers, nurses, and experts in 2010.<br />

The population of the present study consisted of all 442 quoted companies, spanning several industries, on the Tehran exchange. Because of the companies’ number and their dispersal across Iran, it was not feasible to examine the entire population, so a sample was drawn using statistical methods. The research population followed a normal distribution. Cochran’s formula was used to determine the sample size, under the following assumptions: taking the hypothesis confirmation proportion (p) and the hypothesis non-confirmation proportion (q) as equal, at 50%, with an error estimate of d = 10% and a confidence level of 95% (t = z = 1.96), the sample size was determined on the basis of the Cochran formula:<br />

N: population size<br />

n: selected sample size<br />

Finally, with a selected sample of 78 companies, at a 10% error estimate and a 95% confidence level, it was claimed that the selected sample had all the characteristics of the population, so the results were generalized to the whole population.<br />
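The sample-size calculation above follows Cochran’s finite-population formula. A minimal sketch with the stated inputs (N = 442, t = 1.96, p = q = 0.5, d = 0.10):

```python
import math

def cochran_sample_size(N, t=1.96, p=0.5, d=0.10):
    """Cochran's finite-population sample size:
    n = N*t^2*p*q / (N*d^2 + t^2*p*q), with q = 1 - p."""
    q = 1.0 - p
    n = (N * t**2 * p * q) / (N * d**2 + t**2 * p * q)
    return math.floor(n)  # truncating matches the paper's reported 78

print(cochran_sample_size(442))  # 78
```

For N = 442 the exact value is about 78.9, so rounding down reproduces the reported sample of 78 companies.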

A questionnaire was prepared to test the hypotheses. It consisted of 33 five-point Likert-scale questions: 10 questions on human capital, 11 questions on relation capital and 12 questions on structural capital. It must be mentioned that all of the questions were designed according to the criteria below:<br />

1. The scales were consistent with the organizational structures of Iranian companies.<br />

2. There was no industry bias in the questionnaire.<br />

3. The introduced scales were qualitative ones that could be discussed in a quantitative dimension.<br />

4. The questions were meaningful. Because a difficulty for the present study was managers’ and experts’ insufficient familiarity with the concepts of intellectual capital, the scales, the definition of intellectual capital, and its components were submitted to them as well.<br />

Human Capital Scales<br />

Bontis has defined some dimensions of human capital such as personnel satisfaction, the underwriter company, motivation, staff retention, leadership and management, knowledge production, knowledge distribution, learning, knowledge assembly and the period assigned to personnel instruction (Bontis, 2002). Also, in another study, by Miller, dimensions have been defined such as the workers' industrial knowledge, the workers' learning expense, and the workers' high-level education such as M.A. and Ph.D. degrees (Miller, 1999). In designing the questionnaire, scales were considered such as keeping the personnel informed on various matters, encouragement of group work, personnel innovation and risk-taking, the personnel's ideal level of general skills, and the importance of investment in instruction.<br />

Relation Capital Scales<br />

n = N · t² · (p · q) / (N · d² + t² · (p · q))   (Cochran's sample-size formula)<br />

Relative capital is one of the most important dimensions of intellectual capital; it includes the relationships between the company's parts along the value chain. It is clear that the fundamental scales of relation capital depend on the customer and the market. So, besides the shareholders, who are an important part of the company, the producers and society must also be included in relation capital. In designing the relation capital questions, scales were considered such as customer loyalty, customer satisfaction, the sales volume of permanent customers, the number of customer complaints, the extent of customer information used in the company, customer-based information, and market requirements.<br />

Structural Capital Scale<br />

Structural capital is the whole of the assets which make the organization's creative ability possible. In research done on Canadian industry, some scales were considered in question design, such as income earned in return for R&D expenditure, access to information bases, the manner of presenting new products, support for creativity, the management efficiency of the company's information systems and the financial results created in the organization, access to unlimited information, the MIS management information system, investment in R&D, updating of information bases, leadership in expanding ideas and new products, and productivity increases.<br />

The 15th edition of the SPSS software and Cronbach's α were used to determine the reliability of the questions. The Cronbach's α coefficient lies between 0 and 1: 0 indicates no reliability, and 1 indicates full reliability of the questions. A result whose reliability is low is not valid; however, a high reliability degree is not a guarantee of appropriate measurement. According to Nunnally, the α coefficient of the questions must be more than 0.7. The human capital scales, intellectual capital scales and structural capital scales were analyzed with Cronbach's α test. The human capital α coefficient was 0.873, the structural capital α coefficient was 0.843 and the relative capital α coefficient was 0.823. Because all of these values are greater than 0.7, it was concluded that the research questions were reliable.<br />
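The reliability check above can be reproduced with a short routine (a sketch with toy data, not the paper's questionnaire responses):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of respondent rows, one score per item."""
    k = len(items[0])                                  # number of items
    item_vars = [variance(col) for col in zip(*items)] # per-item sample variances
    total_var = variance([sum(row) for row in items])  # variance of total scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: 5 respondents answering 3 perfectly consistent items,
# so alpha reaches its maximum of 1.0.
rows = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]
print(cronbach_alpha(rows))  # 1.0
```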

Discussion<br />

To test the hypotheses of the model, a multiple regression equation is created. For H1 and H2, the equation can be written as follows:<br />

y = B0 + B1·X1 + B2·X2<br />

In this formula, Y is the company's market value, B0 the intercept, B1 the human capital slope, X1 the human capital average, B2 the relative capital slope and X2 the relative capital average. The analysis of the present study was done with the chi-square (χ²) test in the 15th edition of the SPSS software. The results were as follows:<br />

In the discussion of the first hypothesis, that there is a significant relationship between human capital and the market value of companies quoted on the Tehran exchange, the analysis showed that sig = 0.021 &lt; 0.05, so H0 was rejected and the H1 hypothesis was accepted at the 95% confidence level.<br />

Table I. Test of the first hypothesis<br />

Chi-Square Test Frequencies<br />

option | Observed N | Expected N | Residual<br />

3 | 8 | 260.0 | -252.0<br />

4 | 234 | 260.0 | -26.0<br />

5 | 538 | 260.0 | 278.0<br />

Total | 780 | |<br />

Test Statistics: Chi-Square(a) = 544.092, df = 2, Asymp. Sig. = .021<br />

a 0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 260.0.<br />
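The chi-square statistic in Table I follows directly from its observed and expected frequencies; a minimal sketch reproducing it:

```python
# Recomputing the Table I chi-square statistic from its observed and
# expected frequencies (observed 8 / 234 / 538 across the three options).
observed = [8, 234, 538]
expected = [260.0, 260.0, 260.0]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1
print(round(chi2, 3), df)  # 544.092 2
```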



In the discussion of the second hypothesis, that there is a significant relationship between relation capital and the market value of companies quoted on the Tehran exchange, the data analysis gave sig = 0.009 &lt; 0.05, so H0 was rejected and the H1 hypothesis was accepted at the 95% confidence level.<br />

Table II. Test of the second hypothesis<br />

Chi-Square Test Frequencies<br />

option | Observed N | Expected N | Residual<br />

3 | 9 | 286.0 | -277.0<br />

4 | 214 | 286.0 | -72.0<br />

5 | 635 | 286.0 | 349.0<br />

Total | 858 | |<br />

Test Statistics: Chi-Square(a) = 712.287, df = 2, Asymp. Sig. = .009<br />

a 0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 286.0.<br />

After testing the first and second hypotheses, the human capital slope B1 = 0.63, the relation capital slope B2 = 0.74, and the intercept B0 = -2.818 were computed. Finally, the formula below was presented:<br />

y = -2.818 + 0.63·x1 + 0.74·x2<br />

Table III. Regression coefficients (Model 1)<br />

| Unstandardized B | Standardized Beta | Sig.<br />

(Constant) | -2.818 | | .000<br />

human capital | .63 | 4.251 | .000<br />

relative capital | .74 | 5.243 | .000<br />

In the discussion of the third hypothesis, that there is a significant relationship between intellectual capital and human capital, the data analysis gave sig = 0.019 &lt; 0.05, so H0 was rejected and the H1 hypothesis was accepted at the 95% confidence level.<br />
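The fitted equation can be evaluated directly; the Likert-scale averages below are hypothetical, purely for illustration:

```python
# Evaluating the fitted regression y = -2.818 + 0.63*x1 + 0.74*x2,
# where x1 is the human capital average and x2 the relative capital
# average (the inputs below are illustrative, not from the paper's data).
B0, B1, B2 = -2.818, 0.63, 0.74

def market_value(x1, x2):
    return B0 + B1 * x1 + B2 * x2

print(round(market_value(4.0, 4.0), 3))  # 2.662
```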


Table IV. Test of the third hypothesis<br />

Chi-Square Test Frequencies<br />

option | Observed N | Expected N | Residual<br />

3 | 16 | 156.0 | -140.0<br />

4 | 124 | 156.0 | -32.0<br />

5 | 328 | 156.0 | 172.0<br />

Total | 468 | |<br />

Test Statistics: Chi-Square(a) = 321.846, df = 2, Asymp. Sig. = .019<br />

a 0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 156.0.<br />

In the discussion of the last hypothesis, that there is a significant relationship between intellectual capital and relation capital, the data analysis gave sig = 0.015 &lt; 0.05, so H0 was rejected and the H1 hypothesis was accepted at the 95% confidence level.<br />

Table V. Test of the last hypothesis<br />

Chi-Square Test Frequencies<br />

option | Observed N | Expected N | Residual<br />

3 | 18 | 156.0 | -138.0<br />

4 | 132 | 156.0 | -24.0<br />

5 | 318 | 156.0 | 162.0<br />

Total | 468 | |<br />

Test Statistics: Chi-Square(a) = 294.000, df = 2, Asymp. Sig. = .015<br />

a 0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 156.0.<br />

Conclusion<br />

The results of this study showed that the presented research model is meaningful for Iranian industry. Based on the data analysis, the main result of the present study was the significant relationship between the human capital and relation capital of companies quoted on the Tehran exchange and their market value. Also, the structural capital of these companies is significantly related to human capital and relation capital.<br />


References<br />

Anvari Rostami & Rostami (2003), "Appraising the companies' intellectual capital measurement and evaluation models", The Iranian Accounting and Auditing Review, Winter, Vol. 10 No. 34, pp. 51-75.<br />

Bontis, N. (1999), "Managing organisational knowledge by diagnosing intellectual capital: framing and advancing the state of the field", International Journal of Technology Management, Vol. 18 No. 5, pp. 433-462.<br />

Bontis, N. (2001), "Assessing knowledge assets: a review of the models used to measure intellectual capital", International Journal of Management Reviews, Vol. 3 No. 1, pp. 41-60.<br />

Bontis, N. (2003), "Intellectual capital disclosure in Canadian corporations", Journal of Human Resource Costing and Accounting, Vol. 7 Nos 1/2, pp. 9-20.<br />

Bontis, N. (2004), "National intellectual capital index: a United Nations initiative for the Arab region", Journal of Intellectual Capital, Vol. 5 No. 1, pp. 13-39.<br />

Bontis, N., Keow, W.C. and Richardson, S. (2000), "Intellectual capital and business performance in Malaysian industries", Journal of Intellectual Capital, Vol. 1 No. 1, pp. 85-100.<br />

Bozbura, F.T. (2004), "Measurement and application of intellectual capital in Turkey", The Learning Organization, Vol. 11 No. 4/5, pp. 357-367.<br />

Bukh, P.N. (2003), "The relevance of intellectual capital disclosure: a paradox?", Accounting, Auditing & Accountability Journal, Vol. 6 No. 1, pp. 49-56.<br />

Bukh, P.N., Larsen, H.T. and Mouritsen, J. (2001), "Constructing intellectual capital statements", Scandinavian Journal of Management, Vol. 17, pp. 87-108.<br />

Dzinkowski, R. (2000), "The measurement and management of intellectual capital: an introduction", Management Accounting, Vol. 78 No. 2, pp. 32-36.<br />

Edvinsson, L. and Malone, M.S. (1997), Intellectual Capital: Realizing Your Company's True Value by Finding its Hidden Brainpower, HarperBusiness, New York, NY.<br />

Hofman, J. (2005), Value Intangibles! Intangible Capital Can and Must be Valued: Owners and Valuers Alike will Benefit, Deutsche Bank Research, Frankfurt am Main, available at: www.dbresearch.com.<br />

Kujansivu, P. and Lonnqvist, A. (2008), "Business process management as a tool for intellectual capital management", Knowledge and Process Management, Vol. 15 No. 3, pp. 159-169.<br />

Lynn, B.E. (2000), "Intellectual capital: unearthing hidden value by managing intellectual assets", Ivey Business Journal, Vol. 64 No. 3, pp. 48-52.<br />

Meritum (2002), Proyecto Meritum: Guidelines for Managing and Reporting on Intangibles, Madrid.<br />

Miller, M., DuPont, B., Fera, V., Jeffrey, R., Mahon, B., Payer, B. and Starr, A. (1999), "Measuring and reporting intellectual capital from a diverse Canadian industry perspective", International Symposium on Measuring and Reporting Intellectual Capital, 9-10 June.<br />

Mouritsen, J. (2003), "Overview: intellectual capital and the capital market: the circulability of intellectual capital", Accounting, Auditing and Accountability Journal, Vol. 16 No. 1, pp. 18-30.<br />

Mouritsen, J., Larsen, H.T. and Bukh, P.N.D. (2001), "Intellectual capital and the capable firm: narrating, visualizing and numbering for managing knowledge", Accounting, Organizations and Society, Vol. 26, pp. 735-762.<br />

Prusak, L. (1998), Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Cambridge, MA.<br />

Sieber, S.D. (1973), "The integration of fieldwork and survey methods", American Journal of Sociology, Vol. 78 No. 6, pp. 1335-1359.<br />

Stewart, T.A. (1997), Intellectual Capital, Nicholas Brealey, London.<br />

Sveiby, K.E. (1997), The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets, Berrett-Koehler, San Francisco, CA.<br />





EMPIRICAL FINANCE<br />





ANALYZING THE LINK BETWEEN U.S. CREDIT DEFAULT SWAP SPREADS AND MARKET RISK:<br />

A 3-D COPULA FRAMEWORK<br />

Hayette Gatfaoui, Rouen Business School, France<br />

Email: hgt@rouenbs.fr, hgatfaoui@gmail.com<br />

Abstract. The recent mortgage subprime credit crisis, which burst in mid-2007, shed light on the heavy interconnections between credit markets and stock markets. Such turmoil generated a huge questioning of the prevailing Basel 2 requirements with respect to risk assessment. In this light, we focus on the credit default swap market through Markit CDS indexes and study its linkages with the U.S. stock market. The linkages under investigation are envisioned at two levels: a directional price channel and a volatility price channel. Furthermore, we assess the simultaneous interaction between the CDS market and the U.S. stock market through a three-dimensional copula setting. We therefore account simultaneously for the asymmetric dependence structures as well as the differences between the dependence structures of CDS spreads with both market price and market volatility. Our study is relevant for risk monitoring and risk management purposes such as value-at-risk implementations (and other related scenario analyses).<br />

Keywords: Credit risk · Dependence measures · Market risk · Multivariate copulas· Risk management · Tail dependence<br />

JEL classification: C16 · C32 · D81<br />

1 Introduction<br />

The recent mortgage subprime crisis as well as the resulting global financial crisis shed light on the weaknesses and<br />

required enhancements of the prevailing risk management practices. Among the most important enforcements,<br />

liquidity concerns, counterparty credit risk, the correlation between various risks and model stress testing as well as<br />

related scenario analysis have been highlighted by the Basel Committee on Banking Supervision under the Basel 3<br />

framework. With regard to liquidity, various liquidity measures are proposed at both the level of financial assets and<br />

the bank level (i.e. on- and off-balance-sheet prospects). On the correlation viewpoint, the risk of correlation between<br />

risks (e.g. impacts of liquidity risk on market risk and vice versa) refers to the linkages between asset classes, and<br />

between banks/financial institutions among others. On the stress testing and scenario viewpoints, the mitigation of<br />

potential model risk and measurement errors is targeted.<br />

Under the Basel 3 setting, we focus on the correlation risk between credit default swap (CDS) spreads and<br />

market risk components (Norden and Weber, 2009). Specifically, CDS spreads represent a credit risk proxy whereas<br />

market risk is envisioned with respect to two dimensions, namely a market price risk and a market volatility risk<br />

(Dupuis et al., 2009; Gatfaoui, 2010; Scheicher, 2009; Vaz de Melo Mendes and Kolev, 2008). The market price<br />

risk illustrates the impact of the global market trend (i.e. common general link within the stock market) whereas<br />

market volatility risk represents the magnitude of global market moves (i.e. volatility feedback, liquidity concerns).<br />

We study the asymmetric linkages between CDS spreads and both market price and market volatility risks through a<br />

three-dimension copula methodology.<br />

The previous setting targets a sound assessment of credit risk in the light of the stock market’s influence with<br />

respect to the curse of dimensionality, namely the trade-off between the number of parameters, the problem’s<br />

dimension and the sample size. The corresponding multivariate dependence structures exhibit a negative link<br />

between CDS spreads and market price risk, and a positive link between CDS spreads and market volatility risk.<br />

Taking into account simultaneously the dependence of CDS spreads relative to both market price and market<br />

volatility channels allows for a better assessment of the correlation risk between the credit and the stock market.<br />

2 Data and stylized features<br />

We introduce the data under consideration and corresponding statistical patterns.<br />



2.1 Data<br />

We consider two categories of data among which U.S. stock market indexes and credit default swap data focusing<br />

on both North America and emerging markets. Our daily data consist of closing quotes extracted from Reuters, and<br />

ranging at most from September 28th 2007 to March 24th 2010, namely 618 observations per data series. With<br />

regard to the first category of data, we consider the logarithmic returns of the Standard & Poor’s 500 stock market<br />

index in basis points and the level of the CBOE implied volatility index. Specifically, those two indexes are<br />

considered to be a proxy of the two complementary dimensions of market risk, namely the market price risk and the<br />

market volatility risk (see Gatfaoui, 2010). With regard to the second category of data, we consider the spreads of<br />

Markit credit default swap indexes, or equivalently, the spreads of credit derivatives indexes that we name Markit<br />

CDX spreads. Those CDX indexes are split into two groups among which one set of spreads focuses on reference<br />

entities domiciled in North America and the other one relates to reference entities domiciled in emerging markets<br />

(see table 1). In particular, the CDXEM index focuses on sovereign entities whereas the CDXED relates to corporate<br />

and sovereign entities. Moreover, the crossover index accounts for potential rating divergences between Standard &<br />

Poor's and Moody's rating agencies. Furthermore, the CDX spreads 1 under consideration are expressed in basis points and consist of the mid-market quotes on individual issuers. Incidentally, CDXEM spread data range from February 1st 2008 to March 24th 2010, namely 538 observations per data series.<br />

2.2 Stylized features<br />

CDS label Detail about reference entities and indices<br />

CDXEM Emerging Market<br />

CDXED Emerging Market Diversified<br />

CDXHY North America Investment Grade High Yield<br />

CDXHB North America Investment Grade High Yield and B-rated<br />

CDXBB North America Investment Grade High Yield and BB-rated<br />

CDXIG North America Investment Grade<br />

CDXIV North America Investment Grade High Volatility<br />

CDXXO North America Crossover<br />

SP500 Standard & Poor’s 500 stock index<br />

VIX CBOE Implied Volatility Index<br />

Table 1: Markit CDS indexes and stock market indices<br />

We focus on the link prevailing between CDX spread changes on one side, and changes in both SP500 returns as well as the VIX level. For this purpose, we first control for an existing link based on the non-parametric Kendall and Spearman correlation coefficients (see table 2). 2<br />

Spread<br />

Kendall correlation with<br />

Spearman correlation with<br />

SP500 VIX SP500 VIX<br />

CDXEM -0.2676 0.4200 -0.3634 0.5644<br />

CDXED -0.0916 0.2191 -0.1329 0.3114<br />

CDXHY -0.2423 0.4077 -0.3369 0.5627<br />

CDXHB -0.1382 0.3038 -0.1949 0.4246<br />

CDXBB -0.1409 0.2861 -0.1999 0.4042<br />

CDXIG -0.2887 0.4196 -0.4072 0.5779<br />

1 The spreads are computed against corresponding LIBOR rates. The reader is invited to consult Markit Corporation's website at http://www.markit.com for further information.<br />

2 CDS spread changes as well as market indexes exhibit skewness and positive excess kurtosis, underlining their asymmetric behavior over time.<br />



CDXIV -0.2107 0.3484 -0.2975 0.4887<br />

CDXXO -0.1772 0.2749 -0.2584 0.3912<br />

Table 2: Kendall and Spearman correlations between CDX spread changes and changes in both SP500 and VIX<br />

The obtained correlation estimates emphasize the significance of the correlation between the CDS market and the U.S. stock market. As expected, the link between CDS spreads and market price is negative whereas the link between CDS spreads and market volatility is positive. Such a pattern illustrates the well-known volatility feedback effect, formerly introduced by Black (1976).<br />
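For readers who want to reproduce such rank correlations, here is a minimal pure-Python Kendall tau (the tau-a version, assuming no ties) on illustrative data, not the paper's series:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a for samples without ties (O(n^2) reference version)."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# A toy inverse relationship, mimicking the negative CDX/SP500 link.
spread_chg = [5, 3, 1, -2, -4]
sp500_chg = [-10, -4, 2, 6, 9]
print(kendall_tau(spread_chg, sp500_chg))  # -1.0 (perfectly discordant)
```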

Focusing on the dependencies between CDX spreads and market indexes, we then investigate graphically the existence of such links. For this purpose, we plot the CDX spread changes against changes in the SP500 index on one side, and changes in the VIX implied volatility index on the other side (see figure 1 for example). The plots clearly exhibit linkages between CDX spreads and both market price and market volatility. However, such linkages are asymmetric. Moreover, we clearly notice differences between the dependence structure of CDX spread changes with respect to SP500 changes, and the dependence structure with respect to VIX changes.<br />

Figure 1: Dependence structures of CDXHY spreads with both SP500 and VIX indexes<br />

As a result, there exist negative linkages between CDX spread changes and SP500 return changes, and positive linkages between CDX spread changes and VIX changes. Such linkages prove to be asymmetric, and the two types of dependence structures of CDX spread changes, relative to SP500 return changes and VIX changes respectively, exhibit noticeable differences.<br />

3 A Multivariate Copula Application<br />

The previous stylized facts advocate the use of an appropriate statistical tool to handle simultaneously the<br />

dependence structure between the CDS market and the two components of market risk, namely market price and<br />

market volatility risks. For this purpose, we introduce the three-dimension copulas under consideration, the<br />

corresponding data fitting process and the selection criterion of the best copula model.<br />

3.1 Copulas<br />

Copulas are a useful tool to model multivariate dependence structures (Cherubini et al., 2004; Durrleman et al., 2000; Embrechts et al., 2003; Genest et al., 1995; Joe, 1997; McNeil et al., 2005; Nelsen, 1999; Sklar, 1973). They present the advantage of not necessarily having to determine the distribution function of each of the variables under consideration. Hence, it is possible to specify the global dependence structure without knowing the marginals (i.e. univariate distribution functions) of each variable under consideration. As a consequence, the corresponding model risk is minimized. As an example, figure 2 plots the empirical copula function which describes the bivariate dependence structure of CDXHY spread changes with respect to SP500 changes on one side, and VIX changes on the other side. The observed empirical behavior can easily be linked to the theoretical behavior of some well-known copula representations (Cherubini et al., 2004; Joe, 1997; Nelsen, 1999).<br />
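An empirical copula of the kind plotted in figure 2 can be estimated from normalized ranks; the sketch below uses illustrative data (the paper's series are not reproduced here):

```python
def empirical_copula(xs, ys, u, v):
    """C_n(u, v): fraction of points whose normalized ranks are <= (u, v).

    A textbook estimator (not the paper's own code): each margin is
    replaced by its rank divided by n, which is approximately uniform
    on [0, 1]. Assumes distinct sample values (no ties).
    """
    n = len(xs)
    rx = {x: (i + 1) / n for i, x in enumerate(sorted(xs))}
    ry = {y: (i + 1) / n for i, y in enumerate(sorted(ys))}
    return sum(1 for x, y in zip(xs, ys) if rx[x] <= u and ry[y] <= v) / n

xs = [0.3, -1.2, 2.5, 0.7, -0.4]
ys = [0.1, -0.9, 1.8, 0.9, -0.2]
print(empirical_copula(xs, ys, 1.0, 1.0))  # 1.0: all points fall below (1, 1)
```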



We focus on the three-dimensional copula representations of the dependence structures of CDX spread changes and the changes in the two market risk channels. Under such a setting, we face the well-known curse of dimensionality, which represents the trade-off between the dimension of our setting (i.e. a three-dimensional setting), the number of parameters of each considered copula representation, and finally the number of available data points. Given that statistics often advocate parsimonious models, we focus on a specific set of Archimedean and elliptical copulas (see table 3). In particular, the Frank and Gaussian copulas exhibit no tail dependence, namely no link between the variables' extreme values. However, the Student T copula exhibits a symmetric left- and right-tail dependence. Differently, the remaining copulas exhibit asymmetric tail dependences. In particular, the Clayton copula exhibits a lower tail dependence whereas the Gumbel copula exhibits an upper tail dependence.<br />

Figure 2: Empirical bivariate copula functions of CDXHY spread changes<br />

Copula | Attribute | Parameters<br />

Clayton, Frank, Gumbel | Archimedean | Correlation parameter θ<br />

Gaussian | Elliptical | Correlation matrix Σ<br />

Student T | Elliptical | Degree of freedom ν, Correlation matrix Σ<br />

Table 3: Three-dimension copulas and characteristics<br />

Each of the three dimensions of our multivariate copula framework relates respectively to CDX spread changes, SP500 return changes and finally VIX changes from one day to another. This way, the relationships between CDX spreads and the two market dimensions are simultaneously accounted for. For any positive correlation parameter θ and u1, u2, u3 in [0,1], the Clayton copula writes:<br />

C(u1, u2, u3; θ) = ( u1^(-θ) + u2^(-θ) + u3^(-θ) - 2 )^(-1/θ)<br />
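As a sanity check, the Clayton expression can be coded directly (a sketch; theta is the positive dependence parameter from the text):

```python
# Three-dimensional Clayton copula, C = (u1^-t + u2^-t + u3^-t - 2)^(-1/t).
def clayton3(u1, u2, u3, theta):
    return (u1 ** -theta + u2 ** -theta + u3 ** -theta - 2) ** (-1 / theta)

# Boundary checks: C(u, 1, 1) = u and C(1, 1, 1) = 1 for any theta > 0.
print(clayton3(0.5, 1.0, 1.0, 2.0))  # 0.5
print(clayton3(1.0, 1.0, 1.0, 2.0))  # 1.0
```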

For any correlation matrix Σ and u1, u2, u3 in [0,1], the Gaussian copula density writes:<br />

c(u1, u2, u3; Σ) = |Σ|^(-1/2) exp( -(1/2) ζ^t (Σ^(-1) - I) ζ )<br />

where Σ and Σ^(-1) are a three-dimension correlation matrix and its inverse respectively, |Σ| is the determinant of the correlation matrix, ζ is the vector of the inverse standard univariate Gaussian cumulative distribution function applied to each element u1, u2, u3, ζ^t is the transposed vector of ζ, and finally I is the three-dimension identity matrix.<br />
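A minimal evaluation of this density (a sketch: the caller supplies Sigma^(-1) and |Sigma| directly to keep the example dependency-free, and the standard-library `statistics.NormalDist` provides the Gaussian quantiles):

```python
import math
from statistics import NormalDist

def gaussian_copula_density3(u, sigma_inv, det_sigma):
    """|Sigma|^(-1/2) * exp(-0.5 * z^T (Sigma^(-1) - I) z),
    with z the vector of standard Gaussian quantiles of u."""
    z = [NormalDist().inv_cdf(ui) for ui in u]
    identity = [[float(i == j) for j in range(3)] for i in range(3)]
    m = [[sigma_inv[i][j] - identity[i][j] for j in range(3)] for i in range(3)]
    quad = sum(z[i] * m[i][j] * z[j] for i in range(3) for j in range(3))
    return det_sigma ** -0.5 * math.exp(-0.5 * quad)

# With Sigma = I (independence) the density is 1 everywhere.
eye = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(gaussian_copula_density3([0.2, 0.5, 0.8], eye, 1.0))  # 1.0
```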



For any positive correlation parameter θ and u1, u2, u3 in [0,1], the Frank copula writes:<br />

C(u1, u2, u3; θ) = -(1/θ) ln( 1 + (e^(-θ·u1) - 1)(e^(-θ·u2) - 1)(e^(-θ·u3) - 1) / (e^(-θ) - 1)² )<br />
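The Frank expression can likewise be checked numerically (a sketch; note that C(u, 1, 1; theta) must reduce to u):

```python
import math

# Three-dimensional Frank copula.
def frank3(u1, u2, u3, theta):
    def g(u):  # helper: e^(-theta*u) - 1
        return math.exp(-theta * u) - 1
    return -(1 / theta) * math.log(1 + g(u1) * g(u2) * g(u3) / g(1.0) ** 2)

# Margin check: C(u, 1, 1; theta) = u, here with u = 0.3 and theta = 2.
print(round(frank3(0.3, 1.0, 1.0, 2.0), 6))  # 0.3
```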

For any positive correlation parameter θ and u1, u2, u3 in [0,1], the Gumbel copula writes:<br />

C(u1, u2, u3; θ) = exp( -[ (-ln u1)^θ + (-ln u2)^θ + (-ln u3)^θ ]^(1/θ) )<br />
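And the Gumbel expression (a sketch; theta = 1 recovers the independence copula):

```python
import math

# Three-dimensional Gumbel copula.
def gumbel3(u1, u2, u3, theta):
    s = sum((-math.log(u)) ** theta for u in (u1, u2, u3))
    return math.exp(-s ** (1 / theta))

# Independence check: theta = 1 gives C = u1 * u2 * u3.
print(round(gumbel3(0.5, 0.5, 0.5, 1.0), 6))  # 0.125
```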

For any correlation matrix Σ, degree of freedom ν and u1, u2, u3 in [0,1], the Student T copula density writes:<br />

c(u1, u2, u3; Σ, ν) = |Σ|^(-1/2) × [ Γ((ν+3)/2) Γ(ν/2)² / Γ((ν+1)/2)³ ] × ( 1 + ζ^t Σ^(-1) ζ / ν )^(-(ν+3)/2) / Π(n=1..3) ( 1 + ζn² / ν )^(-(ν+1)/2)<br />

where Σ and Σ^(-1) are a three-dimension correlation matrix and its inverse respectively, |Σ| is the determinant of the correlation matrix, Γ is the Gamma function, ζ is the vector (ζ1, ζ2, ζ3) of the inverse univariate Student 3 cumulative distribution function applied to each element u1, u2, u3, and finally ζ^t is the transposed vector of ζ.<br />

3.2 Estimation and selection<br />

We first estimate the copula parameters by running a maximum likelihood estimation (MLE) methodology. However, we correct for possible parameter uncertainty by applying a parametric bootstrapping technique in order to conform to the related MLE asymptotics (i.e. bootstrap MLE; Chen and Fan, 2006; Chernick, 1999; Davison and Hinkley, 2006; Efron, 1979; Simon, 1997; Varian, 2005). The parametric bootstrap, which is also a resampling method, allows for assigning an accuracy measure to parameter estimates. Indeed, parameter uncertainty usually yields the under- or overestimation of model parameters. Correcting for uncertainty and sticking to MLE assumptions therefore allows for more accurate estimates and hence sounder risk assessment. Then, our selection process for the most appropriate copula representation relies on the information criterion principle (i.e. a selection tool). In particular, we consider the Akaike, Schwarz and Hannan-Quinn information criteria. Those information criteria encompass two components, namely the forecast error committed by the model and the number of estimated unconstrained parameters (Akaike, 1974; Lütkepohl, 2006; Hannan and Quinn, 1979; Schwarz, 1978). The model selection rule requires minimizing the information criterion. By doing so, the selection process targets an accurate and parsimonious model (i.e. reducing the potential errors and misestimation problems).<br />
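The three criteria can be computed from a model's maximized log-likelihood; the sketch below uses the standard definitions AIC = 2k - 2 ln L, BIC = k ln(n) - 2 ln L and HQ = 2k ln(ln n) - 2 ln L, with illustrative numbers rather than the paper's estimates:

```python
import math

# Information criteria from a maximized log-likelihood (loglik),
# number of free parameters (k) and sample size (n).
def info_criteria(loglik, k, n):
    aic = 2 * k - 2 * loglik
    bic = k * math.log(n) - 2 * loglik                # Schwarz criterion
    hq = 2 * k * math.log(math.log(n)) - 2 * loglik   # Hannan-Quinn
    return aic, bic, hq

# Illustrative values: 3 parameters, 618 daily observations.
aic, bic, hq = info_criteria(loglik=-250.0, k=3, n=618)
print(aic)  # 506.0
```

For this sample size the usual ordering AIC &lt; HQ &lt; BIC holds, BIC penalizing parameters most heavily.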

The negative Kendall correlation between CDX spreads and SP500 return changes is incompatible with the Clayton copula representation. Moreover, the obtained parameter estimates for the Frank copula are also incompatible with the corresponding theoretical specification. As a result, we display only the chosen information criteria for the remaining copulas (see tables 4, 5 and 6). Amongst the range of representations under consideration, the best copula, or the optimal three-dimension copula estimation, is the one which minimizes at least one (if not all) of the information criteria previously mentioned, namely the Akaike, Schwarz and Hannan-Quinn information criteria. According to tables 4 to 6, the optimal copula representation is the Student T copula for all CDX spreads under consideration, which implies a symmetric tail dependence of CDX spreads with respect to the market risk channels.<br />

3 This is a Student distribution with ν degree(s) of freedom.<br />

2


Spread Akaike Schwarz Hannan-Quinn<br />

CDXEM 2.00747665 6.28599811 3.67664928<br />

CDXED 2.00650408 6.42486904 3.72035251<br />

CDXHY 2.00650411 6.42486907 3.72035253<br />

CDXHB 2.00650411 6.42486907 3.72035254<br />

CDXBB 2.00650407 6.42486903 3.72035250<br />

CDXIG 2.00650409 6.42486905 3.72035252<br />

CDXIV 2.00650410 6.42486906 3.72035253<br />

CDXXO 2.00650415 6.42486911 3.72035258<br />

Table 4: Information criteria for the Gumbel copula estimation<br />

Spread Akaike Schwarz Hannan-Quinn<br />

CDXEM -496.82 -484.01 -491.84<br />

CDXED -409.29 -396.05 -404.16<br />

CDXHY -592.34 -579.11 -587.22<br />

CDXHB -472.05 -458.82 -466.93<br />

CDXBB -444.55 -431.31 -439.43<br />

CDXIG -616.53 -603.30 -611.41<br />

CDXIV -517.10 -503.86 -511.98<br />

CDXXO -429.38 -416.15 -424.26<br />

Table 5: Information criteria for the Gaussian copula estimation<br />

Spread Akaike Schwarz Hannan-Quinn<br />

CDXEM -709.90 -692.83 -703.27<br />

CDXED -505.90 -488.27 -505.90<br />

CDXHY -704.44 -686.81 -704.44<br />

CDXHB -578.60 -560.97 -578.60<br />

CDXBB -517.75 -500.11 -517.75<br />

CDXIG -735.21 -717.57 -735.21<br />

CDXIV -604.00 -586.37 -604.00<br />

CDXXO -486.81 -469.17 -486.81<br />

Table 6: Information criteria for the Student T copula estimation<br />

Further, table 7 displays the corresponding Student T parameter estimates, namely the elements of the<br />

correlation matrix R and the number ν of degrees of freedom.<br />



Spread Correlation with SP500 Correlation with VIX Correlation between SP500 and VIX Degrees of freedom<br />

CDXEM -0.1185 0.3448 -0.6216 3<br />

CDXED 0.0116 0.2182 -0.6072 4<br />

CDXHY -0.2580 0.5142 -0.6223 4<br />

CDXHB -0.1185 0.3448 -0.6216 3<br />

CDXBB -0.0589 0.2280 -0.5775 4<br />

CDXIG -0.3466 0.5648 -0.6061 3<br />

CDXIV -0.2893 0.4166 -0.5967 3<br />

CDXXO -0.1655 0.2379 -0.6068 5<br />

Table 7: Parameter estimates of the three-dimensional Student T copula<br />

Apart from CDXED spreads, results conform to empirical facts so that:<br />

1. the correlation between the changes in CDX spreads and SP500 returns is negative,<br />

2. the correlation between the changes in CDX spreads and VIX levels is positive,<br />

3. the correlation between the changes in SP500 returns and VIX levels is negative.<br />

The positive correlation between the changes in CDXED spreads and SP500 returns probably stems from the curse<br />

of dimensionality: the CDXED series contains more than 100 fewer data points than the other CDX spread<br />

time series. We hope to resolve this issue once our time series are complete. Finally, the obtained<br />

correlation matrix elements are slightly different from the previous Kendall correlation estimates. Indeed, the<br />

average differences between the copula-based correlation and the Kendall counterparts are 0.0267 and 0.0237 with<br />

respect to SP500 returns and VIX levels. In the same way, the average absolute differences between those two types<br />

of correlation estimates are 0.0648 and 0.0665 with respect to SP500 returns and VIX levels.<br />
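For elliptical copulas such as the Gaussian and Student T, the copula correlation ρ and Kendall's tau are linked by τ = (2/π)·arcsin(ρ), which gives one way to translate between the two types of correlation estimates compared above. A minimal sketch; the ρ values are taken from Table 7, and the conversion formula is the standard elliptical-copula result rather than a computation from the paper:<br />

```python
import math

def kendall_tau_from_rho(rho):
    """Kendall's tau implied by an elliptical (e.g. Student t) copula correlation."""
    return (2.0 / math.pi) * math.asin(rho)

def rho_from_kendall_tau(tau):
    """Inverse map: copula correlation implied by a Kendall's tau estimate."""
    return math.sin(math.pi * tau / 2.0)

# Copula correlations of CDX spread changes with SP500 returns (Table 7).
rho_sp500 = {"CDXEM": -0.1185, "CDXHY": -0.2580, "CDXIG": -0.3466}
implied_tau = {k: round(kendall_tau_from_rho(v), 4) for k, v in rho_sp500.items()}
print(implied_tau)
```

Comparing these implied taus with the raw Kendall estimates gives a quick consistency check of the fitted copula correlations.<br />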

4 Conclusion<br />

In this paper, we focused on the dependence structures between CDX spread changes on one side, and changes in<br />

both SP500 returns and the VIX index on the other side. We empirically exhibited the asymmetric nature of each<br />

bivariate dependence structure, namely the dependence structure between CDX spreads and SP500 returns and<br />

that between CDX spreads and the VIX index. In addition, we emphasized the differences between those two types<br />

of bivariate dependence structures, which we handled simultaneously within a three-dimensional copula analysis.<br />

Balancing the curse of dimensionality with a parsimonious modeling framework, we selected three Archimedean<br />

copulas and two classic elliptical copulas in order to test for various tail dependencies.<br />

The estimation process and the selected information criterion statistics identified the Student T dependence<br />

structure as the optimal three-dimensional copula representation. Therefore, we have to cope with symmetric tail<br />

dependencies between CDX spreads and the two market risk channels mentioned above. We are therefore able to<br />

carry out a more accurate and global credit risk scenario analysis in the light of both market trend and<br />

market volatility levels. Hence, the three-dimensional copula framework is a useful tool for risk<br />

monitoring/management and risk reporting purposes under Basel 3. Moreover, a natural extension of our study<br />

consists of a scenario analysis describing the impact of the market risk channels on the evolution of CDS spreads.<br />

Such a framework is useful for value-at-risk or even stressed value-at-risk implementations as well as related<br />

scenario analyses.<br />



5 References<br />

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control<br />

19(6), 716–723.<br />

Black, F. (1976). Studies of stock price volatility changes. Proceedings of the 1976 Meetings of the American<br />

Statistical Association, Business and Economic Statistics Section, 177–181.<br />

Chen, X., & Fan, Y. (2006). Estimation of copula-based semiparametric time series models. Journal of<br />

Econometrics 130(2), 307–335.<br />

Chernick, M. R. (1999). Bootstrap Methods, A practitioner's guide. Wiley Series in Probability and Statistics.<br />

Cherubini, U., Luciano, E., & Vecchiato, W. (2004). Copula Methods in Finance. Chichester: Wiley.<br />

Davison, A. C., & Hinkley, D. (2006). Bootstrap Methods and their Application. 8th edition, Cambridge: Cambridge<br />

Series in Statistical and Probabilistic Mathematics.<br />

Dupuis, D., Jacquier, E., Papageorgiou, N., & Rémillard, B. (2009). Empirical evidence on the dependence of credit<br />

default swaps and equity prices. Journal of Futures Markets 29(8), 695-712.<br />

Durrleman, V., Nikeghbali, A., & Roncalli, T. (2000). Which copula is the right one? Technical Report,<br />

Operational Research Group of Crédit Lyonnais, Paris.<br />

Efron, B. (1979). Bootstrap Methods: Another Look at the Jackknife. Annals of Statistics 7(1), 1-26.<br />

Embrechts, P., Lindskog, F., & McNeil, A.J. (2003). Modeling dependence with copulas and applications to risk<br />

management. In S. Rachev (Ed.), Handbook of Heavy Tailed Distributions in Finance (pp. 329–384). North-<br />

Holland: Elsevier.<br />

Gatfaoui, H. (2010). Investigating the dependence structure between credit default swap spreads and the U.S.<br />

financial market. Annals of Finance, 6(4), 511-535.<br />

Genest, C., Ghoudi, K., & Rivest, L.-P. (1995). A semi-parametric estimation procedure of dependence parameters<br />

in multivariate families of distributions. Biometrika 82(3), 543–552.<br />

Hannan, E. J., & B. G. Quinn (1979). The determination of the order of an autoregression. Journal of the Royal<br />

Statistical Society B 41, 190–195.<br />

Joe, H. (1997). Multivariate Models and Dependence Concepts. Monographs on Statistics and Applied Probability,<br />

Vol. 73. London: Chapman & Hall.<br />

Lütkepohl, H. (2006). Palgrave Handbook of Econometrics. <strong>Volume</strong> 1: Econometric Theory, Chapter Vector<br />

Autoregressive Models, pp. 477–510. Houndmills: Palgrave Macmillan.<br />

McNeil, A., Frey, R., & Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools.<br />

Princeton: Princeton University Press.<br />

Nelsen, R.B. (1999). An Introduction to Copulas. Lecture Notes in Statistics, 139. New York: Springer.<br />

Norden, L., & Weber, M. (2009). The co-movement of credit default swap, bond and stock markets: An empirical<br />

analysis. European Financial Management 15(3), 529-562.<br />

Scheicher, M. (2009). The correlation of a firm’s credit spread with its stock price: Evidence from credit default<br />

swaps. In G. N. Gregoriou (Ed.), Stock Market Volatility (Chap. 21, pp. 405-419). London: Chapman &<br />

Hall/CRC Finance.<br />

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics 6(2), 461–464.<br />

Simon, J. L. (1997). Resampling: The New Statistics. Resampling Stats.<br />

Sklar, A. (1973). Random variables, joint distribution functions and copulas. Kybernetika 9(6), 449–460.<br />

Varian, H. (2005). Bootstrap tutorial. Mathematica Journal 9(4), 768-775.<br />

Vaz de Melo Mendes, B., & Kolev, N. (2008). How long memory in volatility affects true dependence structure.<br />

International Review of Financial Analysis 17(5), 1070-1086.<br />



MODELING OF LINKAGES BETWEEN STOCK MARKETS INCLUDING THE EXCHANGE RATE<br />

DYNAMICS<br />

Malgorzata Doman, Associate Professor, Poznan University of Economics, Department of Applied Mathematics<br />

Al. Niepodleglosci 10, Poznań, Poland, E-mail: malgorzata.doman@ue.poznan.pl<br />

Ryszard Doman, Associate Professor, Adam Mickiewicz University in Poznan, Faculty of Mathematics and Computer Science,<br />

Umultowska 87, 61-614 Poznan, Poland, E-mail: rydoman@amu.edu.pl<br />

Abstract. The analysis of linkages between national stock markets is usually based on models describing dependencies between returns<br />

on stocks or indices. Some papers concerning the subject present results for quotations in local currencies, while others consider the stock<br />

market data denominated in one currency (usually the US dollar). In this paper, we address the question of how introducing<br />

the exchange rate dynamics into a model affects the dependence analysis. We apply and compare two different ways of tackling the<br />

problem. The first consists in denominating the analyzed quotations in one currency (the US dollar or the euro). The second deals with directly<br />

introducing the exchange rate into a model for the dependence structure. Our analysis is based on the return series on selected stock<br />

indices from the period 1995-2010. To describe the dependence structure, we apply dynamic copulas. Such an approach allows us to separate<br />

the dynamics of dependence from the volatility dynamics.<br />

1. Introduction<br />

The knowledge about linkages between stock markets is of importance in risk management and building investment<br />

strategies. Moreover, it is crucial for understanding the nature of the global financial market. So, it is quite natural that<br />

there exist many papers dealing with this problem. Most of them belong to the contagion literature. The most popular<br />

approach here is to denominate the indices (or other stock market quotations) in local currencies (Eun and Shin<br />

1989, Koutmos 1992, Theodossiou and Lee 1993, Wong et al. 2004). The next popular choice is denomination in the<br />

US dollar (e.g. Karolyi and Stulz 1996, Rodriguez 2007). There exist analyses performed both in a local currency<br />

and the US dollar (e.g. Lee et al. 2001). Chen and Poon (2007) use local currency for indices in the case of developed<br />

markets, and for emerging markets they use US dollar denominated indices. Veiga and McAleer (2004) remarked<br />

that the use of the US dollar as a common currency is a complicating factor. This is because in such a situation<br />

the US market is always included in the empirical analysis. Changes in the US dollar are largely influenced by<br />

changes in US fundamentals, which also drive financial returns. Thus, it is likely that some of the co-movements<br />

observed among returns in different markets expressed in a common currency are caused by changes in the fundamentals<br />

driving the US dollar exchange rate. However, their results (Veiga and McAleer 2004) based on quite extensive<br />

analysis of the sensitivity of spillover effects on denomination show that the denomination has no significant<br />

impact on the results.<br />

In the paper, we ask how introducing the exchange rate dynamics influences the dynamics of linkages between<br />

stock indices. We consider dependencies between the S&P500 index and two European indices, the DAX and the<br />

WIG20 (the main index of the Warsaw Stock Exchange). The analysis of linkages is performed by means of a DCC-copula<br />

model. We estimate dynamic copula correlations between the indices denominated in local currencies and in<br />

chosen alternative currencies. The aim of the presented investigation is to analyze the sensitivity of the dynamic<br />

copula correlation estimates to the denomination of indices in alternative currencies. In the case of the S&P500 and<br />

the DAX the considered currencies are the US dollar and the euro. The analysis for the S&P500 and the WIG20<br />

includes denomination in the US dollar, the euro and the Polish zloty. Moreover, for both pairs of the indices we<br />

calculate the dynamic copula correlations based on a three-dimensional DCC-copula model estimated jointly for the<br />

indices denominated in local currencies and the corresponding exchange rate (USD/EUR in the case of SP500-DAX<br />

and USD/PLN for SP500-WIG20).<br />

2. DCC-copula models<br />

Modeling the dependencies between financial returns is a difficult task because of the special properties of these series.<br />

Typical return series usually exhibit conditional heteroskedasticity, different types of asymmetries, and structural<br />

breaks which strongly influence estimation results for models of the dependence structure. Moreover, the dynamics<br />

of dependencies changes significantly over time. For example, it is well documented in many studies that dependence<br />

between returns on different assets is usually stronger in bear markets than in bull markets (Ang and Bekaert 2002,<br />

Ang and Chen 2002, Patton 2004). This example of asymmetric dependence in financial markets is of great<br />

importance for portfolio choice and risk management. The main problem connected with this phenomenon is, however,<br />

that from the theoretical point of view, the mentioned asymmetry cannot be produced by a statistical model for the<br />

returns that assumes an elliptical multivariate conditional distribution, and thus applying the linear correlation is not<br />

justified. An alternative concept that allows for modeling the dependence in a general situation is the copula. Roughly<br />

speaking, a d-dimensional copula is a mapping C : [0, 1]^d → [0, 1] from the unit hypercube into the unit interval<br />

which is a distribution function with standard uniform marginal distributions.<br />

Assume that X = (X_1, …, X_d) is a d-dimensional random vector with joint distribution F and marginal distributions<br />

F_i, i = 1, …, d. Then, by a theorem by Sklar (1959), F can be written as<br />

F(x_1, …, x_d) = C(F_1(x_1), …, F_d(x_d)). (1)<br />

The function C is unique if the F_i are continuous. Otherwise, C is uniquely given by<br />

C(u_1, …, u_d) = F(F_1^(-1)(u_1), …, F_d^(-1)(u_d)), (2)<br />

for u_i ∈ [0, 1], where F_i^(-1)(u) = inf{x : F_i(x) ≥ u}. In that case, C is called the copula of F or of X. Since the marginals<br />

and the dependence structure can be separated, it makes sense to interpret C as the dependence structure of<br />

the vector X. We refer to Patton (2009) and references therein for an overview of financial time series applications<br />

of copulas. There one can also find more information about advantages and limitations of copula-based modeling.<br />
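The probability integral transform behind Sklar's theorem is easy to illustrate: applying a continuous marginal distribution function to its own random variable produces a standard uniform variate, so the copula describes whatever dependence remains once the marginals are stripped away. A minimal stand-alone sketch with a standard normal marginal (illustrative only, not the estimation procedure of the paper):<br />

```python
import math
import random

def normal_cdf(x):
    """Standard normal distribution function F(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Probability integral transform: if X ~ F with F continuous, then F(X) ~ U(0, 1).
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]
u = [normal_cdf(x) for x in sample]

# The transformed sample should have mean ~1/2 and variance ~1/12, like U(0, 1).
mean_u = sum(u) / len(u)
var_u = sum((v - mean_u) ** 2 for v in u) / len(u)
print(round(mean_u, 3), round(var_u, 3))
```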

The simplest copula is defined by C^⊥(u_1, …, u_d) = u_1 ⋯ u_d, and it corresponds to independence of the marginal<br />

distributions. The next two important examples are C^+(u_1, …, u_d) = min(u_1, …, u_d) and, in the two-dimensional<br />

case, C^-(u_i, u_j) = max(u_i + u_j − 1, 0). The first corresponds to comonotonicity or perfect dependence (one variable<br />

can be transformed almost surely into another by means of an increasing map), and the second to countermonotonicity<br />

or perfect negative dependence of the variables X_i and X_j (one variable can be transformed almost surely<br />

into another by means of a decreasing map). In the empirical part of this paper we will use the Student t copula. It is<br />

defined as follows:<br />

C^St_{ν,R}(u_1, …, u_d) = t_{ν,R}(t_ν^(-1)(u_1), …, t_ν^(-1)(u_d)), (3)<br />

where t_{ν,R} denotes the d-dimensional Student's t distribution with ν degrees of freedom and correlation matrix R,<br />

and t_ν stands for the 1-dimensional Student's t distribution with ν degrees of freedom. In the bivariate case we will use<br />

the notation C^St_{ν,ρ}, where ρ stands for the correlation coefficient.<br />

The density associated to an absolutely continuous copula C is a function c defined by<br />

c(u_1, …, u_d) = ∂^d C(u_1, …, u_d) / (∂u_1 ⋯ ∂u_d). (4)<br />

For an absolutely continuous random vector, the copula density c is related to its joint density function f by the<br />

following canonical representation:<br />

f(x_1, …, x_d) = c(F_1(x_1), …, F_d(x_d)) f_1(x_1) ⋯ f_d(x_d), (5)<br />

where F_1, …, F_d are the marginal distributions, and f_1, …, f_d are the marginal density functions.<br />

In the case of non-elliptical distributions, measures of dependence that are more appropriate than the linear correlation<br />

coefficient are provided by two important copula-based tools known as Kendall's tau and Spearman's rho<br />

(Embrechts et al. 2002). Since the dynamics of Kendall's tau can be easily derived for the results presented in this<br />

paper, we recall suitable definitions. If (X, Y) is a random vector and (X̃, Ỹ) is an independent copy of (X, Y), then<br />

Kendall's tau for (X, Y) is defined as<br />

τ(X, Y) = P{(X − X̃)(Y − Ỹ) > 0} − P{(X − X̃)(Y − Ỹ) < 0}. (6)<br />

Thus Kendall's tau for (X, Y) is the probability of concordance minus the probability of discordance. If<br />

(X, Y) is a vector of continuous random variables with copula C, then<br />

τ(X, Y) = 4 ∫∫_{[0,1]²} C(u, v) dC(u, v) − 1. (7)<br />

For the Student t copula C^St_{ν,ρ}, Kendall's tau equals (2/π) arcsin(ρ).<br />


A very important concept connected with copulas, relevant to dependence in extreme values, is tail dependence<br />

(Nelsen 2006). If X and Y are random variables with distribution functions F and G, then the coefficient of upper tail<br />

dependence is defined as follows:<br />

λ_U = lim_{q→1⁻} P(Y > G^(-1)(q) | X > F^(-1)(q)), (8)<br />

provided a limit λ_U ∈ [0, 1] exists. Analogously, the coefficient of lower tail dependence is defined as<br />

λ_L = lim_{q→0⁺} P(Y ≤ G^(-1)(q) | X ≤ F^(-1)(q)), (9)<br />

provided that a limit λ_L ∈ [0, 1] exists. If λ_U ∈ (0, 1] (λ_L ∈ (0, 1]), then X and Y are said to exhibit upper (lower) tail<br />

dependence. Upper (lower) tail dependence quantifies the likelihood of observing a large (low) value of Y given a<br />

large (low) value of X. The coefficients of tail dependence depend only on the copula C of X and Y:<br />

λ_L = lim_{q→0⁺} C(q, q)/q,  λ_U = lim_{q→0⁺} Ĉ(q, q)/q, (10)<br />

where Ĉ(u, v) = u + v − 1 + C(1 − u, 1 − v). For the Student t copula C^St_{ν,ρ}, the coefficients of upper and lower tail<br />

dependence are both equal to 2 t_{ν+1}(−√((ν + 1)(1 − ρ)/(1 + ρ))) (see McNeil et al. 2005).<br />
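The tail-dependence coefficient of the Student t copula can be evaluated numerically. The sketch below implements the formula above; the univariate t distribution function is computed here by simple Simpson integration of its density, which is accurate enough for illustration, and the parameter values are hypothetical:<br />

```python
import math

def student_t_cdf(x, m, steps=20_000):
    """Distribution function of the 1-dimensional Student t with m d.o.f.,
    computed by Simpson integration of the density (sufficient for a sketch)."""
    c = math.gamma((m + 1.0) / 2.0) / (math.sqrt(m * math.pi) * math.gamma(m / 2.0))
    pdf = lambda t: c * (1.0 + t * t / m) ** (-(m + 1.0) / 2.0)
    # Integrate the density between x and 0 and use symmetry around zero.
    a, b = (x, 0.0) if x < 0 else (0.0, x)
    h = (b - a) / steps
    area = pdf(a) + pdf(b)
    for i in range(1, steps):
        area += (4.0 if i % 2 else 2.0) * pdf(a + i * h)
    area *= h / 3.0
    return 0.5 - area if x < 0 else 0.5 + area

def t_copula_tail_dependence(rho, nu):
    """lambda_U = lambda_L = 2 * t_{nu+1}( -sqrt((nu+1)(1-rho)/(1+rho)) )."""
    arg = -math.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho))
    return 2.0 * student_t_cdf(arg, nu + 1.0)

print(round(t_copula_tail_dependence(0.5, 3.0), 4))
```

As expected, the coefficient shrinks toward the Gaussian value of zero as ν grows.<br />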

Introduced by Patton (2004), the notion of conditional copula allows one to apply copulas to modeling the joint distribution<br />

of r_t conditional on an information set Ω_{t−1}, where r_t = (r_{1,t}, …, r_{d,t})′ is a d-dimensional vector of financial<br />

returns. In this paper we consider the following general conditional copula model:<br />

r_{1,t} | Ω_{t−1} ~ F_{1,t}(· | Ω_{t−1}), …, r_{d,t} | Ω_{t−1} ~ F_{d,t}(· | Ω_{t−1}), (11)<br />

r_t | Ω_{t−1} ~ F_t(· | Ω_{t−1}), (12)<br />

F_t(r_t | Ω_{t−1}) = C_t(F_{1,t}(r_{1,t} | Ω_{t−1}), …, F_{d,t}(r_{d,t} | Ω_{t−1}) | Ω_{t−1}), (13)<br />

where the set Ω_t includes the up-to-time-t information on the returns on the considered financial assets, and C_t is<br />

the conditional copula linking the marginal conditional distributions. Further, we assume that<br />

r_t = μ_t + y_t,  μ_t = E(r_t | Ω_{t−1}), (14)<br />

y_{i,t} = σ_{i,t} ε_{i,t},  σ²_{i,t} = var(r_{i,t} | Ω_{t−1}), (15)<br />

ε_{i,t} ~ iid Skew_t(0, 1, ν_i, ξ_i), (16)<br />

where Skew_t(0, 1, ν, ξ) denotes the standardized skewed Student t distribution with ν > 2 degrees of freedom and<br />

skewness coefficient ξ > 0 (Lambert and Laurent 2001). To the marginal return series r_{i,t}, i = 1, …, d, we fit<br />

ARMA-GARCH models with skewed Student's t distributions for the 1-dimensional innovations.<br />

When modeling the joint conditional distribution, the evolution of the conditional copula C_t has to be specified.<br />

Usually (Patton 2004, 2006), the functional form of the conditional copula is fixed, but its parameters evolve<br />

through time. In this paper, we follow that approach and apply the DCC model proposed by Engle (2002), extended<br />

to Student t copulas. Thus in our DCC-t-copula model we assume that the conditional copula C_t is a Student t copula<br />

C^St_{ν,R_t} such that<br />

R_t = diag(Q_t)^(−1/2) Q_t diag(Q_t)^(−1/2), (17)<br />

Q_t = (1 − α − β) Q̄ + α ỹ_{t−1} ỹ′_{t−1} + β Q_{t−1}, (18)<br />

where α ≥ 0, β ≥ 0, α + β < 1, ỹ_{i,t} = t_ν^(−1)(u_{i,t}) with u_{i,t} the probability integral transform of the standardized<br />

innovation (see section 4), i = 1, …, d, and Q̄ is the unconditional covariance matrix of ỹ_t.<br />
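The recursion (17)-(18) is straightforward to implement. Below is a minimal bivariate sketch with hypothetical transformed residuals and illustrative parameter values; it only filters R_t given fixed parameters and is not the estimation procedure of the paper:<br />

```python
import math

def dcc_recursion(y_tilde, q_bar, alpha, beta):
    """Run the DCC recursion (17)-(18) in the 2x2 case:
    Q_t = (1-a-b)*Qbar + a * y_{t-1} y_{t-1}' + b * Q_{t-1},
    R_t = diag(Q_t)^(-1/2) Q_t diag(Q_t)^(-1/2).
    Returns the path of the off-diagonal element of R_t."""
    assert alpha >= 0 and beta >= 0 and alpha + beta < 1
    q = [row[:] for row in q_bar]  # initialize Q at its unconditional value
    correlations = []
    for y in y_tilde:
        q = [[(1 - alpha - beta) * q_bar[i][j] + alpha * y[i] * y[j] + beta * q[i][j]
              for j in range(2)] for i in range(2)]
        r12 = q[0][1] / math.sqrt(q[0][0] * q[1][1])  # off-diagonal of R_t
        correlations.append(r12)
    return correlations

# Hypothetical t-quantile transformed copula data (the y~ series of eq. (18)).
y_tilde = [(0.5, 0.4), (-1.2, -0.9), (2.0, 1.5), (0.1, -0.3), (-0.7, -0.5)]
q_bar = [[1.0, 0.3], [0.3, 1.0]]  # unconditional covariance of y~
corr_path = dcc_recursion(y_tilde, q_bar, alpha=0.02, beta=0.97)
print([round(c, 4) for c in corr_path])
```

The α and β magnitudes were chosen to resemble the small-α, near-unit-β estimates typical of DCC fits (compare Table 5).<br />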

3. The Data<br />

In the paper we present results of an analysis concerning the dependencies between the S&P500 and two European<br />

indices, the DAX and the WIG20 (the main index of the Warsaw Stock Exchange). We chose the indices from two<br />

neighboring countries which differ in the level of development. As mentioned in the Introduction, a very common<br />

approach in stock market linkage analysis is to investigate the indices of developed markets in local currency, and<br />

those from emerging markets denominated in an alternative currency (mostly the US dollar). Our dataset contains the<br />

quotations of the considered indices and the exchange rates EUR/USD and USD/PLN. The quotation series were<br />

obtained from the service Stooq. The period under scrutiny is from January 3, 1995 to December 11, 2009.<br />



Since the patterns of non-trading days in national stock markets differ, for the purpose of modeling dependencies<br />

the dates of observations for each pair of indices were checked, and observations not corresponding to ones in<br />

the other index quotation series were removed. The time series under scrutiny are percentage logarithmic daily returns<br />

calculated by the formula<br />

r_t = 100(ln P_t − ln P_{t−1}), (19)<br />

where P_t denotes the closing index value on day t.<br />
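The return construction and the removal of non-matching trading days can be sketched as follows; the dates and closing prices are hypothetical:<br />

```python
import math

def pct_log_returns(prices):
    """Percentage logarithmic returns: r_t = 100 * (ln P_t - ln P_{t-1})  (eq. 19)."""
    return [100.0 * (math.log(p1) - math.log(p0)) for p0, p1 in zip(prices, prices[1:])]

def align_by_date(series_a, series_b):
    """Keep only dates quoted in both markets (non-trading days differ)."""
    common = sorted(set(series_a) & set(series_b))
    return [series_a[d] for d in common], [series_b[d] for d in common]

# Hypothetical closing quotes; "2000-01-03" is a non-trading day on market B.
a = {"2000-01-03": 100.0, "2000-01-04": 101.0, "2000-01-05": 99.5}
b = {"2000-01-04": 50.0, "2000-01-05": 50.5}
pa, pb = align_by_date(a, b)
ra, rb = pct_log_returns(pa), pct_log_returns(pb)
print(ra, rb)
```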

The descriptive statistics of the analyzed return series are presented in Table 1. In Tables 2-3 we show in-sample<br />

estimates of the unconditional correlations.<br />

Table 1. Descriptive statistics of the analyzed return series<br />

Index Mean Maximum Minimum Stand. Dev. Skewness Kurtosis<br />

S&P500 0.0238 10.957 -9.4695 1.289 -0.1783 10.9195<br />

S&P500 in EUR 0.0190 9.5946 -8.7688 1.4663 -0.2152 7.0311<br />

S&P500 in PLN 0.0282 10.054 -9.1886 1.4320 -0.0718 8.1544<br />

DAX 0.0276 10.797 -9.791 1.5877 -0.0635 10.9230<br />

DAX in USD 0.0325 13.5020 -9.4710 1.6741 0.0517 8.5868<br />

WIG20 0.0302 13.709 -14.161 1.9544 -0.1548 6.7066<br />

WIG20 in USD 0.0262 14.995 -19.463 2.2717 -0.2531 8.2124<br />

WIG20 in EUR 0.0212 16.368 -17.481 2.3220 -0.1743 8.6857<br />

Table 2. S&P500 and DAX. Estimates of the unconditional correlation of the returns<br />

S&P500 S&P500 in EUR<br />

DAX 0.5590 0.5309<br />

DAX in USD 0.5227<br />

Table 3. S&P500 and WIG20. Estimates of the unconditional correlation of the returns<br />

S&P500 S&P500 in EUR S&P500 in PLN<br />

WIG20 0.2682 ----- 0.1212<br />

WIG20 in USD 0.2824 ----- -----<br />

WIG20 in EUR ----- 0.2749 -----<br />

4. Empirical analysis of the stock market linkages<br />

The course of the presented analysis is as follows. We investigate the dependencies between the returns for two pairs of<br />

indices: S&P500-DAX and S&P500-WIG20. In each case we model the dynamic copula correlations by means of<br />

the DCC-t-copula model described in section 2. The pair S&P500-DAX is considered in local currencies, in the US<br />

dollar, and in the euro. For S&P500-WIG20 we additionally take into account denomination in the Polish zloty.<br />

Moreover, we estimate jointly the dynamic copula correlations for triples of returns: S&P500-DAX-EUR/USD and<br />

S&P500-WIG20-USD/PLN. The advantage of copula models is that they allow us to separate the dependence dynamics<br />

from the volatility dynamics. The feedback between these two features causes many problems in traditional analyses<br />

based on multivariate volatility models.<br />

The DCC-t-copula models are estimated using a two-step maximum likelihood approach. The first step includes<br />

fitting a GARCH model to each return series (Laurent 2009). The types of fitted models differ depending on the currency<br />

used to denominate an index (Table 4). Next, the GARCH standardized residuals are transformed by means of<br />

their theoretical cumulative distribution functions to obtain series of data uniformly distributed on [0,1]. In the<br />

second step the DCC-t-copula models are fitted to the transformed series. Thus we follow the method of inference<br />

functions for margins (Joe and Xu 1996).<br />
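The hand-off between the two steps can be sketched as follows. For simplicity a Gaussian innovation distribution function is used below as a stand-in for the skewed Student t of the paper, and all numbers are hypothetical:<br />

```python
import math

def normal_cdf(x):
    """Standard normal distribution function, used as a stand-in innovation CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def to_copula_data(returns, conditional_sd):
    """Step-1 -> step-2 hand-off: standardize each (demeaned) return by its
    conditional volatility, then map it through the innovation distribution
    function so the result is (approximately) uniform on [0, 1]."""
    return [normal_cdf(r / s) for r, s in zip(returns, conditional_sd)]

returns = [0.8, -1.5, 0.2, 2.1, -0.4]  # hypothetical demeaned returns
cond_sd = [1.0, 1.2, 0.9, 1.5, 1.1]    # hypothetical GARCH volatilities
u = to_copula_data(returns, cond_sd)
print([round(v, 3) for v in u])
```

The resulting uniform series is exactly the input the second-step copula likelihood expects.<br />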

A first observation coming from Table 4 is that the conditional mean and volatility dynamics are sensitive to denomination.<br />

The DCC-t-copula parameter estimates are presented in Table 5. It is worth noticing that in the case of the<br />

pair S&P500-WIG20 denominated in local currencies the dynamics of the conditional copula correlations is very weak.<br />

Table 4. Types of fitted ARMA-GARCH models<br />

Return series ARMA GARCH Error distribution<br />

S&P500 (1,1) GJR-GARCH(1,2) Skewed Student<br />

S&P500 in EUR (0,2) GJR-GARCH(1,2) Skewed Student<br />

S&P500 in PLN (1,1) GARCH(1,1) Skewed Student<br />

DAX (2,2) FIAPARCH(1,1) Skewed Student<br />

DAX in USD (1,0) GARCH(1,1) Skewed Student<br />

WIG20 (0,1) FIAPARCH(1,1) Student<br />

WIG20 in USD (2,0) GJR-GARCH(1,2) Student<br />

WIG20 in EUR (0,0) FIGARCH(1,1) Student<br />



Table 5. Parameter estimates for the fitted DCC-t-copula model (standard errors in parentheses)<br />

Model α β ν<br />

S&P500-DAX in local currencies 0.0146 (0.003) 0.9841 (0.004) 14.4538 (3.819)<br />

S&P500-DAX in EUR 0.0204 (0.006) 0.9750 (0.008) 13.1894 (3.582)<br />

S&P500-DAX in USD 0.0185 (0.003) 0.9805 (0.004) 15.5688 (4.665)<br />

S&P500-DAX-USD/EUR (joint) 0.0175 (0.003) 0.9789 (0.004) 12.5504 (1.843)<br />

S&P500-WIG20 in local currencies 0.0075 (0.008) 0.9900 (0.016) 14.6137 (3.710)<br />

S&P500-WIG20 in EUR 0.0109 (0.004) 0.9836 (0.007) 16.1879 (4.616)<br />

S&P500-WIG20 in USD 0.0102 (0.004) 0.9878 (0.005) 14.9618 (3.911)<br />

S&P500-WIG20 in PLN 0.0072 (0.002) 0.9871 (0.005) 11.3950 (2.286)<br />

S&P500-WIG20-USD/PLN (joint) 0.0125 (0.003) 0.9821 (0.005) 16.6111 (3.004)<br />

Figure 1 shows a comparison of the dynamic copula correlations for the pair S&P500-DAX obtained in all considered<br />

cases. The dynamics of the correlations is quite strong. The strongest dependencies are observed in the years<br />

2001-2004 and 2008-2009. The values of the correlations calculated for the indices denominated in local currencies,<br />

in the euro, and modeled jointly with the exchange rate EUR/USD are quite close to each other. Only in the case of<br />

denomination in the US dollar are the correlation estimates clearly lower. The mean levels of the estimated dynamic<br />

copula correlations (Table 6) are significantly different, and the highest mean is obtained in the case of the<br />

dependencies between the indices and the exchange rate EUR/USD modeled jointly. The null hypothesis of equal<br />

means was tested using the Model Confidence Set procedure (Hansen et al. 2003, 2011, Hansen and Lunde 2007)<br />

applied to the set of dynamic copula correlation series.<br />

Figure 1. SP500 and DAX. Dynamic copula correlations from DCC-t-copula model<br />

The estimates of the dynamic copula correlations obtained for the pair S&P500-WIG20 are much lower but show a<br />

similar pattern to the previous case: the dynamics of the conditional copula correlations is strong, but it does not<br />

depend significantly on the choice of currency. The only exception concerns the clearly weaker dependencies in the<br />

case of the indices denominated in the Polish zloty. The difference is more visible after Poland joined the EU. The<br />

testing procedure, the same as in the previously considered case, indicated that the mean levels of the estimated dynamic<br />

copula correlations (Table 6) are significantly different.<br />

Table 6. Means of the dynamic copula correlation estimates<br />

S&P500 and DAX S&P500 and WIG20<br />

in local currencies 0.4975 0.2434<br />

in USD 0.4339 0.2345<br />

in EUR 0.4903 0.2711<br />

modeled jointly with the exchange rate 0.4992 0.2455<br />

in PLN ----- 0.1325<br />



Figure 2. SP500 and WIG20. Dynamic copula correlations from DCC-t-copula models<br />

In Figures 3-10, we put together the levels and volatilities of the exchange rates and the differences of the dynamic<br />

copula correlations estimated based on the series of returns calculated for the indices denominated in the considered<br />

currencies. There is no clear pattern visible. However, it seems that the dynamic copula correlation estimates are<br />

mostly higher when the local currencies are applied. Introducing the corresponding exchange rate into a DCC-t-copula<br />

model does not change the results significantly (Figure 7). The linkages measured for the indices denominated<br />

in local currencies are usually stronger than in the other cases. The linkages of the indices denominated in the<br />

US dollar seem to be weaker during the financial crises.<br />

Figure 3. EUR/USD Figure 4. EUR/USD. Volatility estimate<br />

Figure 5. S&P500-DAX. Differences between the dynamic copula<br />

correlations calculated for the indices denominated in local currencies<br />

and in the euro<br />


Figure 6. S&P500-DAX. Differences between the dynamic copula<br />

correlations calculated for the indices denominated in local currencies<br />

and in the US dollar


Figure 7. S&P500-DAX. Differences between the dynamic copula<br />

correlations calculated for the indices denominated in local currencies<br />

and the dynamic copula correlations from the model fitted to<br />

the triple of return series (S&P500-DAX-EUR/PLN)<br />

Figure 8. S&P500-DAX. Differences between the dynamic copula<br />

correlations calculated for the indices denominated in the euro and<br />

in the US dollar<br />

Figure 9. S&P500 and DAX. Differences between the dynamic<br />

copula correlations calculated for indices denominated in euro and<br />

the dynamic copula correlations from the model fitted to the triple<br />

of return series (S&P500-DAX-EUR/PLN)<br />

Figure 10. S&P500 and DAX. Differences between the dynamic<br />

copula correlations calculated for the indices denominated in the<br />

US dollar and the dynamic copula correlations from the model<br />

fitted to the triple of return series (S&P500-DAX-EUR/USD)<br />

The plots for the pair S&P500-WIG20 are shown in Figures 11-18. The highest mean level of the dynamic copula<br />

correlations occurs in the case of denomination in the euro (Table 6). Generally, the results for the pair S&P-<br />

WIG20 differ from those presented for S&P-DAX. First of all, the correlations calculated for the indices denominated<br />

in local currencies are not systematically higher than the others. Denominating the indices in the Polish zloty<br />

results in the lowest values of the correlation estimates. This is especially visible when one compares this case with<br />

the case of denomination in the euro. In the period 1995-2003 the correlations calculated between the indices denominated<br />

in the US dollar are lower than those in the euro. The situation changes in 2004, when the correlation estimates<br />

for the indices denominated in the US dollar become higher. This effect can probably be connected with Poland's<br />

accession to the EU.<br />

Figure 11. USD/PLN<br />


Figure 12. USD/PLN. Volatility estimates


Figure 13. S&P500-WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in local<br />
currencies and in the euro<br />

Figure 14. S&P500-WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in local<br />
currencies and in the US dollar<br />

Figure 15. S&P500 and WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in local<br />
currencies and in the Polish zloty<br />

Figure 16. S&P500 and WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in local<br />
currencies and the dynamic copula correlations from the model<br />
fitted to the triple of return series (S&P500-WIG20-USD/PLN)<br />

Figure 17. S&P500 and WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in the<br />
euro and the dynamic copula correlations from the model fitted to<br />
the triple of return series (S&P500-WIG20-USD/PLN)<br />

Figure 18. S&P500 and WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in the<br />
US dollar and the dynamic copula correlations from the model<br />
fitted to the triple of return series (S&P500-WIG20-USD/PLN)<br />

Figure 19. S&P500 and WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in the<br />
US dollar and in the euro<br />

Figure 20. S&P500 and WIG20. Differences between the dynamic<br />
copula correlations calculated for the indices denominated in the<br />
US dollar and in the Polish zloty<br />


Figure 21. S&P500 and WIG20. Differences between the dynamic<br />

copula correlations calculated for the indices denominated in the<br />

Polish zloty and the dynamic copula correlations from the model<br />

fitted to the triple of return series (S&P500-WIG20-USD/PLN)<br />

Figure 22. S&P500 and WIG20. Differences between the dynamic copula correlations<br />
calculated for the indices denominated in the euro and in the<br />
Polish zloty<br />

5. Conclusions<br />

The aim of the presented research was to determine how the dynamics of linkages between stock markets changes<br />

under the impact of introducing the exchange rate dynamics into a model. We considered dependencies between the<br />

S&P500 index and two European indices, the DAX and the WIG20. To analyze the stock index linkages we use<br />

DCC-t-copula models. The advantage of the applied approach is that it allows us to separate the dynamics of linkages<br />

from the volatility dynamics. The presented results are introductory and slightly ambiguous, but generally show that<br />

the impact of denomination, or of introducing the exchange rate directly into the model for dependencies, is weak. The<br />

patterns observed for the pairs S&P-DAX and S&P-WIG20 differ from each other. In the case of the DAX, the linkages<br />

measured for the indices denominated in local currencies are mostly stronger than the others. The highest mean<br />

level of the dynamic copula correlations appears when the indices are denominated in local currencies and modeled<br />

jointly with the exchange rate EUR/PLN. For the WIG20 index the highest mean level of the correlations appears<br />

when the indices are denominated in the euro. Some impact of Poland joining the EU on the investigated dependencies<br />

can be observed. Nevertheless, our findings seem to support the view that denomination has no significant impact<br />

on the dynamics of stock index linkages.<br />

6. References<br />

Ang A., Bekaert G. (2002), International Asset Allocation with Regime Shifts, Review of Financial Studies 15,<br />

1137-1187.<br />

Ang A., Chen J. (2002), Asymmetric correlations of equity portfolios, Journal of Financial Economics 63, 443-494.<br />

Chen S., Poon S.-H. (2007), Modelling International Stock Market Contagion Using Copula and Risk Appetite,<br />

MBS Working Paper Series, SSRN: http://ssrn.com/abstract=1024288<br />

Engle R.F. (2002), Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional<br />

heteroskedasticity models, Journal of Business and Economic Statistics 20, 339-350.<br />

Eun C.S., Shim S. (1989), International Transmission of Stock Market Movements, Journal of Financial and Quantitative<br />

Analysis, 24, 241-256.<br />

Embrechts P., McNeil A., Straumann D. (2002), Correlation and Dependence in Risk Management: Properties and<br />

Pitfalls, in: Risk Management: Value at Risk and Beyond, Cambridge University Press, Cambridge, 176-223.<br />

Hansen P.R., Lunde A., Nason J.M. (2003), Choosing the Best Volatility Models: The Model Confidence Set Approach,<br />

Oxford Bulletin of Economics and Statistics 65, 839-861.<br />

Hansen P.R., Lunde A. (2007), MulCom 1.00. Econometric Toolkit for Multiple Comparisons, Available at:<br />

www.hha.dk/~alunde/MULCOM/MULCOM.HTM<br />

Hansen P.R., Lunde A., Nason J.M. (2011), The Model Confidence Set, Econometrica 79, 453-497.<br />

Joe H., Xu J.J. (1996), The Estimation Method of Inference Functions for Margins for Multivariate Models, Technical<br />

Report no. 166, Department of Statistics, University of British Columbia.<br />



Karolyi G.A., Stulz R.M. (1996), Why do markets move together? An investigation of U.S.–Japan stock market<br />

comovements, Journal of Finance 51, 951–986.<br />

Koutmos G. (1992), Asymmetric Volatility and Risk Return Tradeoff in Foreign Stock Markets, Journal of Multinational<br />

Financial Management, 2, 27-43.<br />

Laurent S. (2009), Estimating and forecasting ARCH models using G@RCH TM 6, Timberlake Consultants Ltd, London.<br />

Lambert P., Laurent S. (2001), Modelling financial time series using GARCH-type models with a skewed Student<br />

distribution for the innovations, Institut de Statistique, Université Catholique de Louvain, Discussion Paper<br />

0125.<br />

Lee B., Rui O.M., Wang S.S. (2001), Information Transmission Between NASDAQ and Asian Second Board Market,<br />

Journal of Banking & Finance 28, 1637–1670.<br />

McNeil A.J., Frey A., Embrechts P. (2005), Quantitative Risk Management, Princeton University Press, Princeton.<br />

Nelsen R.B. (2006), An Introduction to Copulas, Springer Science+Business Media, Inc., New York.<br />

Patton A.J. (2004), On the Out-of-Sample Importance of Skewness and Asymmetric Dependence for Asset Allocation,<br />

Journal of Financial Econometrics 2, 130-168.<br />

Patton A.J. (2006), Modelling Asymmetric Exchange Rate Dependence, International Economic Review 47, 527-<br />

556.<br />

Patton A.J. (2009), Copula-Based Models for Financial Time Series, in: T. G. Andersen, R. A. Davies, J.-P. Kreiss,<br />

and T. Mikosch, eds., Handbook of Financial Time Series, Springer, Berlin, pp. 767-785.<br />

Rodriguez J.C. (2007), Measuring financial contagion: A copula approach, Journal of Empirical Finance 14, 401-<br />

423.<br />

Sklar A. (1959), Fonctions de répartition à n dimensions et leurs marges, Publications de l’Institut Statistique de<br />

l’Université de Paris 8, 229-231.<br />

Theodossiou P., Lee U. (1993), Mean and Volatility Spillovers across Major National Stock Markets: Further Empirical<br />

Evidence, Journal of Financial Research, 16, 327-350.<br />

Wong W.-K, Penm J., Terrel R.D, Lim K. (2004), The Relationship Between Stock Markets of Major Developed<br />

Countries and Asian Emerging Markets, Journal of Applied Mathematics And Decision Sciences, 8, 201–218.<br />

Veiga B., McAleer M. (2004), Testing the sensitivity of spillover effects across financial markets, in C. Pahl-Wostl,<br />

S. Schmidt, A.E. Rizzoli and A.J. Jakeman (eds.), Complexity and Integrated Resources Management: Transactions<br />

of the International Conference on Environmental Modelling and Software, Osnabrueck, Germany,<br />

published by iEMSs, Manno, Switzerland, 1523-1529<br />



PROPAGATION OF SHOCKS IN GLOBAL STOCK MARKET: IMPULSE RESPONSE ANALYSIS IN A<br />

COPULA FRAMEWORK<br />

Ryszard Doman, Associate Professor, Adam Mickiewicz University in Poznan, Faculty of Mathematics and Computer Science,<br />

Umultowska 87, 61-614 Poznan, Poland, E-mail: rydoman@amu.edu.pl<br />

Malgorzata Doman, Associate Professor, Poznan University of Economics, Department of Applied Mathematics,<br />

Al. Niepodleglosci 10, 60-697 Poznan, E-mail: malgorzata.doman@ue.poznan.pl<br />

Abstract. As a result of the increasing speed of information spreading and the ease of capital movement, interdependencies between<br />

markets have strengthened during the last few decades. Closer linkages between international markets can contribute to a faster transmission<br />

of shocks and thus have a significant impact on decision-making in risk management. Therefore, understanding the nature<br />

and the dynamics of international market dependencies is of great importance in finance as well as in macroeconomic policy. To investigate<br />

the process of shock transmission, we introduce a concept of an impulse response function describing the time profile of the effect<br />

of shocks on linkages modeled by means of a dynamic copula model and measured using Kendall's tau and the tail dependence<br />

coefficients. This approach is applied to analyze the persistence of the impact of some important historically observed shocks on the<br />

strength of linkages between the returns on selected stock indices during the crisis of 2007-2009.<br />

1. Introduction<br />

The integration of financial markets observed in recent decades is accompanied by a strengthening of linkages between national<br />

markets and, in particular, can contribute to a faster transmission of shocks, and thus has a significant impact<br />

on decision-making in risk management. Therefore, understanding the nature and the dynamics of international<br />

market dependencies is of great importance in finance, as well as in macroeconomic policy. The measures of<br />

dependence usually used in financial market modeling make it possible to describe and evaluate the strength of linkages, but<br />

are not very useful in determining the direction of the impact of shocks on dependencies. The aim of this paper is to<br />

analyze how shocks hitting one market influence the connection between that market and another one. For some<br />

shocks (which can be attributed to one market) this should make it possible to determine the direction of linkages<br />

between the markets.<br />

To investigate the process of shock transmission, we introduce a concept of impulse response function describing<br />

the time profile of the effect of shocks on linkages modeled by means of dynamic copula, and measured using<br />

Kendall’s tau and the tail dependence coefficients. This approach is applied to analyze the persistence of the impact<br />

of some important historically observed shocks on the strength of linkages between the returns on selected stock<br />

indices during the crisis 2007-2009.<br />

2. A Review of Impulse Response Functions<br />

A simple linear model useful in modeling multivariate asset returns is the vector autoregressive model VAR(p):<br />

$r_t = \phi_0 + \Phi_1 r_{t-1} + \ldots + \Phi_p r_{t-p} + u_t$, (1)<br />

where $\phi_0$ is a k-dimensional vector, $u_t$ is a sequence of serially uncorrelated random vectors with mean zero and<br />
nonsingular covariance matrix $\Sigma_u$, and $\Phi_i$ are fixed $(k \times k)$ coefficient matrices (Tsay 2005). Suppose that the<br />
process has the canonical MA representation<br />

$r_t = \mu + \sum_{i=0}^{\infty} \Psi_i u_{t-i}$, $\Psi_0 = I_k$. (2)<br />

Then the coefficient matrix $\Psi_i$ can be considered as the effect of $u_t$ on the future observation $r_{t+i}$. Because of this,<br />
$\Psi_i$ is usually referred to as the impulse response function of $r_t$. If the components of $u_t$ are correlated, the interpretation<br />
of the elements of $\Psi_i$ is difficult. So one usually applies the Cholesky decomposition $\Sigma_u = P P'$ to obtain the<br />
uncorrelated innovations $w_t = P^{-1} u_t$. In such a case, the decomposition (2) takes the form<br />

$r_t = \mu + \sum_{i=0}^{\infty} \Theta_i w_{t-i}$, (3)<br />

where $\Theta_i = \Psi_i P$ is called the impulse response function of $r_t$ with orthogonal innovations $w_t$.<br />
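For a concrete VAR(1), equations (1)-(3) reduce to $\Psi_i = \Phi_1^i$ and $\Theta_i = \Psi_i P$. A minimal sketch, with illustrative coefficient values (not estimates from the paper):<br />

```python
import numpy as np

# a stationary bivariate VAR(1): r_t = phi0 + Phi1 r_{t-1} + u_t (illustrative numbers)
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.3]])
Sigma_u = np.array([[1.0, 0.4],
                    [0.4, 0.8]])

# MA coefficients of (2): Psi_i = Phi1^i for a VAR(1)
horizon = 10
Psi = [np.linalg.matrix_power(Phi1, i) for i in range(horizon)]

# orthogonalized impulse responses of (3): Theta_i = Psi_i P, with Sigma_u = P P'
P = np.linalg.cholesky(Sigma_u)
Theta = [Ps @ P for Ps in Psi]
```

Because the spectral radius of $\Phi_1$ is below one, the responses die out as the horizon grows.<br />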

The generalized impulse response function for not necessarily linear time series was proposed by Koop, Pesaran<br />
and Potter (1996). They consider a process of the form<br />

$Y_t = F(Y_{t-1}, \ldots, Y_{t-p}) + A_t V_t$, (4)<br />

where F is a known function, $V_t$ is a k-dimensional vector of IID disturbances, $A_t$ is a $(k \times k)$ random matrix which<br />
is a function of $\{Y_{t-1}, \ldots, Y_{t-p}\}$, and the shocks have zero mean and finite variance. The generalized impulse response<br />
is then defined as<br />

$GI_Y(n, \nu_t, \Omega_{t-1}) = E(Y_{t+n} \mid \nu_t, \Omega_{t-1}) - E(Y_{t+n} \mid \Omega_{t-1})$, (5)<br />

where $\Omega_{t-1}$ denotes the set containing information used to forecast $Y_t$, and $\nu_t$ is an arbitrary current shock. Thus the<br />
generalized impulse response function is the difference between the mean of the response vector conditional on<br />
history and a present shock, and the baseline expectation that conditions only on history. So, in contrast to the linear<br />
case, the shape of the impulse responses now depends on the history of the variables and may differ at each<br />
time point. Moreover, if, for example, the impulses represent news arriving in a financial market, positive news may<br />
have quite different effects than negative news.<br />
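For a nonlinear model, the conditional expectations in (5) can be approximated by simulation. The sketch below uses a simple threshold AR(1) as an illustrative nonlinear model (it is not a model from the paper) and estimates the generalized impulse response as the shocked conditional mean minus the baseline that conditions on history only:<br />

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_path(y0, n, shock=None, reps=50_000):
    # simulate Y_{t+n} for a threshold AR(1): Y = a(Y)*Y + v,
    # with a(y) = 0.8 for y > 0 and 0.2 otherwise (illustrative dynamics)
    y = np.full(reps, float(y0))
    # current shock: a fixed value nu_t if given, otherwise integrated out
    v = np.full(reps, float(shock)) if shock is not None else rng.standard_normal(reps)
    y = np.where(y > 0, 0.8*y, 0.2*y) + v
    for _ in range(n):
        y = np.where(y > 0, 0.8*y, 0.2*y) + rng.standard_normal(reps)
    return y.mean()

# GI_Y(n, nu_t, history) of (5): shocked conditional mean minus baseline
history, nu_t, n = 1.0, 2.0, 5
gi = mean_path(history, n, shock=nu_t) - mean_path(history, n)
```

Since the model is nonlinear, the estimate depends on the history value and on the sign and size of the shock, exactly as the text notes.<br />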

The next step was taken by Hafner and Herwartz (2006) in the framework of multivariate volatility models. For<br />
a multivariate volatility model<br />

$r_t = \mu_t + y_t$, $\mu_t = E(r_t \mid \Omega_{t-1})$, $E(y_t y_t' \mid \Omega_{t-1}) = H_t$, (6)<br />

they consider responses of the conditional covariance matrix $H_t$, given by the formula<br />

$V_t(\xi_0) = E(\mathrm{vech}(H_t) \mid \xi_0, \Omega_{-1}) - E(\mathrm{vech}(H_t) \mid \Omega_{-1})$, (7)<br />

where $\mathrm{vech}(\cdot)$ denotes the operator that stacks the parts of the columns of a symmetric matrix that start from the diagonal.<br />
For many classes of multivariate GARCH models there exist explicit formulas for the impulse response functions<br />
defined by (7). The covariance matrix, however, describes dependence properly only for elliptical distributions. In the general<br />
case, a much better solution is to use copulas.<br />

3. Copulas and Dependence Measures<br />

Let X be a k-dimensional random vector with joint distribution F and marginal distributions $F_i$, $i = 1, \ldots, k$. If the<br />
functions $F_i$ are continuous, then by a theorem by Sklar (1959) there exists a unique function C such that the following<br />
decomposition holds:<br />

$F(x_1, \ldots, x_k) = C(F_1(x_1), \ldots, F_k(x_k))$. (8)<br />

In the above situation the function C is given by the formula<br />

$C(u_1, \ldots, u_k) = F(F_1^{-1}(u_1), \ldots, F_k^{-1}(u_k))$, (9)<br />

where $F_i^{-1}(u_i) = \inf\{x_i : F_i(x_i) \geq u_i\}$ for $u_i \in [0, 1]$. It follows from (9) that the function C can be seen as a multivariate<br />
distribution with the one-dimensional marginal distributions being uniform on the unit interval. On the other<br />
hand, since the marginals and the dependence structure in (9) are separated, it makes sense to interpret C as the dependence<br />
structure of the vector X. The function C is called the copula of X (or F).<br />

In this paper, we deal only with bivariate copulas, though many of the considered issues can be easily extended<br />
to the general multivariate case (Nelsen 2006, Joe 1997). The simplest copula, $C^{\perp}$, describes independent marginal<br />
distributions and thus is defined by $C^{\perp}(u_1, u_2) = u_1 u_2$. The next important examples are $C^{+}(u_1, u_2) = \min(u_1, u_2)$ and<br />
$C^{-}(u_1, u_2) = \max(u_1 + u_2 - 1, 0)$. The first corresponds to comonotonicity or perfect dependence (one variable can be<br />
transformed almost surely into another by means of an increasing map), and the second to countermonotonicity or<br />
perfect negative dependence (one variable can be transformed almost surely into another by means of a decreasing<br />
map). A well known example is also the Gaussian copula defined as<br />

$C^{Gauss}_{\rho}(u_1, u_2) = \Phi_{\rho}(\Phi^{-1}(u_1), \Phi^{-1}(u_2))$, (10)<br />

where $\Phi_{\rho}$ denotes the distribution of a standard 2-dimensional normal vector with the linear correlation coefficient<br />
$\rho$, and $\Phi$ stands for the standard normal distribution function. In the empirical part of this paper we use the generalized<br />
Clayton (GC) copula, called also the BB1 copula (Joe 1997). It is defined by the following formula:<br />

$C^{GC}_{\theta,\delta}(u_1, u_2) = [((u_1^{-\theta} - 1)^{\delta} + (u_2^{-\theta} - 1)^{\delta})^{1/\delta} + 1]^{-1/\theta}$, (11)<br />

where $\theta > 0$ and $\delta \geq 1$.<br />

If a copula C has density c, then almost everywhere in the interior of the unit square $[0, 1] \times [0, 1]$,<br />

$c(u_1, u_2) = \dfrac{\partial^2 C(u_1, u_2)}{\partial u_1 \partial u_2}$. (12)<br />

In the case of a continuous random vector, the copula density c is related to the joint density function f by the following<br />
canonical representation:<br />

$f(x_1, x_2) = c(F_1(x_1), F_2(x_2))\, f_1(x_1)\, f_2(x_2)$, (13)<br />

where $F_i$ are the marginal distribution functions and $f_i$ are the densities ($i = 1, 2$).<br />

It is well documented (McNeil, Frey and Embrechts 2005) that for non-elliptical distributions the linear correlation<br />
coefficient is an inappropriate measure of dependence, and often can be misleading. In that case, other measures<br />
of dependence, based on the notion of concordance, provide better alternatives. An example of such measures is<br />
Kendall's tau. Since it plays a significant role in our approach to measuring impulse responses, we recall its definition.<br />
If $(X_1, X_2)$ is a random vector and $(\tilde{X}_1, \tilde{X}_2)$ is an independent copy of $(X_1, X_2)$, then Kendall's tau for $(X_1, X_2)$<br />
is defined as<br />

$\tau(X_1, X_2) = P\{(X_1 - \tilde{X}_1)(X_2 - \tilde{X}_2) > 0\} - P\{(X_1 - \tilde{X}_1)(X_2 - \tilde{X}_2) < 0\}$. (14)<br />

Thus, in fact, Kendall's tau gives the probability of concordance minus the probability of discordance. If $(X_1, X_2)$ is a<br />
continuous random vector with copula C, then its Kendall's tau can be calculated by the formula<br />

$\tau(X_1, X_2) = 4 \iint_{[0,1]^2} C(u_1, u_2)\, dC(u_1, u_2) - 1$.<br />

This means that Kendall's tau depends only on the linking copula and, in particular, it can be considered to be a<br />
measure of the degree of monotonic dependence between $X_1$ and $X_2$. It can also be shown that $-1 \leq \tau(X, Y) \leq 1$.<br />
Moreover, $\tau(X, Y) = 1$ is equivalent to comonotonicity, and $\tau(X, Y) = -1$ means countermonotonicity (Nelsen<br />
2006). For the Gaussian copula $C^{Gauss}_{\rho}$, Kendall's tau equals $\frac{2}{\pi}\arcsin(\rho)$, and for the BB1 copula $C^{GC}_{\theta,\delta}$, it is known<br />
to be equal to $1 - 2/(\delta(\theta + 2))$.<br />

A very important concept connected with copulas, relevant to dependence in extreme values, is tail dependence.<br />
If $X_1$ and $X_2$ are random variables with distribution functions $F_1$ and $F_2$, then the coefficient of upper tail dependence<br />
is defined as follows:<br />

$\lambda_U = \lim_{q \to 1^-} P(X_2 > F_2^{-1}(q) \mid X_1 > F_1^{-1}(q))$, (15)<br />

provided a limit $\lambda_U \in [0, 1]$ exists. Analogously, the coefficient of lower tail dependence is defined as<br />

$\lambda_L = \lim_{q \to 0^+} P(X_2 \leq F_2^{-1}(q) \mid X_1 \leq F_1^{-1}(q))$, (16)<br />

provided that a limit $\lambda_L \in [0, 1]$ exists. If $\lambda_U \in (0, 1]$, then $X_1$ and $X_2$ are said to exhibit upper tail dependence. Analogously,<br />
if $\lambda_L \in (0, 1]$, then $X_1$ and $X_2$ are said to exhibit lower tail dependence. Upper (lower) tail dependence<br />
quantifies the likelihood of observing a large (low) value of $X_2$ given a large (low) value of $X_1$. The coefficients of<br />
tail dependence depend only on the copula C of $X_1$ and $X_2$:<br />

$\lambda_L = \lim_{q \to 0^+} \dfrac{C(q, q)}{q}$, (17)<br />

$\lambda_U = \lim_{q \to 0^+} \dfrac{\hat{C}(q, q)}{q}$, (18)<br />

where $\hat{C}(u, v) = u + v - 1 + C(1 - u, 1 - v)$. For the Gaussian copula it holds that $\lambda_U = \lambda_L = 0$, while for the BB1<br />
copula, $\lambda_L = 2^{-1/(\delta\theta)}$ and $\lambda_U = 2 - 2^{1/\delta}$ (McNeil, Frey and Embrechts 2005).<br />
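The closed-form BB1 dependence measures quoted above can be checked directly against the limit definitions (17) and (18). The snippet below evaluates the BB1 copula at illustrative parameter values ($\theta = 0.5$, $\delta = 1.5$, chosen for demonstration only) and compares the numerical tail-dependence ratios at a small q with the stated formulas:<br />

```python
def bb1(u, v, theta, delta):
    # BB1 (generalized Clayton) copula of equation (11), theta > 0, delta >= 1
    s = ((u**-theta - 1)**delta + (v**-theta - 1)**delta)**(1/delta)
    return (1 + s)**(-1/theta)

theta, delta = 0.5, 1.5  # illustrative values

# closed forms quoted in the text
tau   = 1 - 2/(delta*(theta + 2))
lam_L = 2**(-1/(delta*theta))
lam_U = 2 - 2**(1/delta)

# numerical versions of the limits (17) and (18) at a small q
q = 1e-6
lam_L_num = bb1(q, q, theta, delta)/q
c_hat = q + q - 1 + bb1(1 - q, 1 - q, theta, delta)  # survival copula at (q, q)
lam_U_num = c_hat/q
```

The two numerical ratios agree with the closed forms to a few decimal places already at q = 1e-6, which is why the BB1 family is convenient when the tail-dependence coefficients are the quantities of interest.<br />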


4. Stochastic Dynamic Copula Model<br />

The standard definition of copula has been extended by Patton (2004) to the conditional case. The only complication<br />
involved is that the conditioning set must be the same for both marginal distributions and the copula. In this<br />
paper, however, contrary to Patton's approach, we do not assume deterministic formulas for the copula dynamics but<br />
instead, similarly to Hafner and Manner (2008) and Almeida and Czado (2010), the dynamic copula parameters are<br />
modeled as stationary autoregressive stochastic processes. More specifically, we consider the following general<br />
model for the 2-dimensional vector time series $r_t = (r_{1,t}, r_{2,t})'$ of financial returns:<br />

$r_{1,t} \mid \Omega_{t-1} \sim F_{1,t}(\cdot)$, $r_{2,t} \mid \Omega_{t-1} \sim F_{2,t}(\cdot)$, (19)<br />

$r_t \mid \Omega_{t-1} \sim C_t(F_{1,t}(\cdot), F_{2,t}(\cdot) \mid \Omega_{t-1}; \theta_t)$, (20)<br />

where $\Omega_{t-1}$ is the information set that includes all returns on both considered financial instruments realized up to<br />
time $t-1$, $C_t$ is the conditional copula, and $\theta_t$ is some vector of parameters that evolves in time. We restrict<br />
ourselves to two-parameter families of bivariate copulas, which means that $\theta_t = (\theta_{1,t}, \theta_{2,t})'$. Moreover, we assume<br />
that $\theta_t = h(\gamma_t)$ for some bijective transformation h of the Euclidean plane $R^2$ onto the range of $\theta_t$, and<br />

$\gamma_t = \mu + \mathrm{diag}(\beta_1, \beta_2)(\gamma_{t-1} - \mu) + \mathrm{diag}(\nu_1, \nu_2)\,\varepsilon_t$, (21)<br />

where $\varepsilon_t \sim NID(0, I_2)$, $|\beta_1| < 1$, $|\beta_2| < 1$, $\nu_1 > 0$, $\nu_2 > 0$.<br />

In addition, when it is the case that for the applied copula family the parameters can be expressed by the upper<br />
and lower tail dependence coefficients:<br />

$\theta_1 = g_1(\lambda_L, \lambda_U)$, $\theta_2 = g_2(\lambda_L, \lambda_U)$, (22)<br />

we propose that the dynamics enters the model through them:<br />

$\lambda_{L,t} = \exp(\gamma_{1,t})/(1 + \exp(\gamma_{1,t}))$, $\lambda_{U,t} = \exp(\gamma_{2,t})/(1 + \exp(\gamma_{2,t}))$. (23)<br />

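A minimal simulation of the latent dynamics (21) with the logistic link (23); the parameter values below are assumptions chosen for the sketch, not estimates from the paper:<br />

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
mu   = np.array([-1.0, -0.5])   # long-run latent levels (illustrative)
beta = np.array([0.95, 0.90])   # AR(1) persistence, |beta_i| < 1
nu   = np.array([0.10, 0.10])   # innovation scales, nu_i > 0

gamma = np.empty((T, 2))
gamma[0] = mu
for t in range(1, T):
    # equation (21): gamma_t = mu + diag(beta)(gamma_{t-1} - mu) + diag(nu) eps_t
    gamma[t] = mu + beta*(gamma[t-1] - mu) + nu*rng.standard_normal(2)

# equation (23): the logistic link keeps the tail-dependence coefficients in (0, 1)
lam = np.exp(gamma)/(1.0 + np.exp(gamma))
lam_L_t, lam_U_t = lam[:, 0], lam[:, 1]
```

The logistic transform is what makes the unrestricted Gaussian AR(1) latent states compatible with the bounded range of $\lambda_L$ and $\lambda_U$.<br />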
5. Posterior and Predictive Inference for the Copula Model<br />


We are going to use the introduced stochastic dynamic copula model to define impulse response functions describing<br />
the impact of past shocks on Kendall's tau and the lower and upper tail dependence coefficients. Because of the<br />
presence of the latent processes $\gamma_t$ and the necessity of prediction with a very long horizon by a highly nonlinear<br />
model, we perform a two-stage estimation. In the first step, univariate ARMA-GARCH models are fitted, and the<br />
standardized residuals are transformed by means of the corresponding distribution functions into the series<br />
$u_t = (u_{1,t}, u_{2,t})'$ of uniform variates. In the second step, the series $u_t$ is considered as the data in the Bayesian posterior<br />
and predictive inference for the copula model. Here we use the Metropolis within Gibbs sampler (see e.g. Geweke<br />
2005) for the model parameters $\alpha = (\mu, \beta, \nu)'$, as well as for the latent variables $\gamma_F = (\gamma_t : t \leq T + K)'$, and the<br />
missing observations $u_F = ((u_{1,t}, u_{2,t})' : T < t \leq T + K)'$. In our case, the objective of inference can generally be expressed<br />
as the posterior density of a vector of interest, $\omega$, conditional on the observed data $U_T = \{u_1, \ldots, u_T\}$:<br />

$p(\omega \mid U_T) = \int p(\omega \mid \theta, U_T)\, p(\theta \mid U_T)\, d\theta$, (24)<br />

where the vector of interest, $\omega$, will stand for future values of Kendall's tau and the lower and upper tail dependence<br />
coefficients.<br />

The Bayesian approach enables us to calculate the responses based not only on predictive moments but also on<br />
predictive quantiles and, in particular, the predictive median. This is because both the mean, $E(\omega)$, and the quantiles<br />
of the distributions of the components of $\omega$ are Bayes actions corresponding to some loss functions.<br />

The elements of a Bayesian decision problem (Berger 1985, Geweke 2005) are: an action $a \in A \subseteq R^m$ controlled<br />
by the decision-maker, a loss function $L(a, \omega)$ ($\geq 0$) depending on the action and a vector of interest, $\omega \in \Omega \subseteq R^q$,<br />
and a distribution $p(\omega \mid Y)$. A Bayes action is any action $\hat{a} \in A \subseteq R^m$ which minimizes the posterior expected<br />
loss $E(L(a, \omega) \mid Y) = \int_{\Omega} L(a, \omega)\, p(\omega \mid Y)\, d\omega$. If $L(a, \omega) = (a - \omega)' Q (a - \omega)$, where Q is a positive definite matrix,<br />
then the Bayes action is $\hat{a} = E(\omega \mid Y)$. If $a \in A \subseteq R$, $\omega \in \Omega \subseteq R$, and $L(a, \omega)$ is a linear-linear function given by the<br />
formula $L(a, \omega) = (1 - q)(a - \omega) I_{(-\infty, a)}(\omega) + q(\omega - a) I_{(a, \infty)}(\omega)$, $q \in (0, 1)$, then the Bayes action $\hat{a}$ is the qth quantile of<br />
the posterior distribution of $\omega$.<br />

For our purposes, it is important that, under some regularity conditions, Bayes actions can be approximated by<br />
Markov Chain Monte Carlo (MCMC) sampling. More specifically, the following result holds (Geweke 2005, Amemiya<br />
1985). Suppose that in the Markov chain C the sequence $(\theta^{(m)}, \omega^{(m)})$ is ergodic with invariant density<br />
$p(\theta \mid Y)\, p(\omega \mid \theta, Y)$. Let $L(a, \omega) \geq 0$ be a loss function defined on $A \times \Omega$ and suppose that the risk function<br />

$R(a) = \int \int L(a, \omega)\, p(\theta \mid Y)\, p(\omega \mid \theta, Y)\, d\theta\, d\omega$ (25)<br />

has a strict global minimum at $\hat{a} \in A \subseteq R^m$. Then, under some additional regularity conditions, for any $\varepsilon > 0$,<br />
$\lim_{M \to \infty} P(\inf_{a \in A_M} (a - \hat{a})'(a - \hat{a}) > \varepsilon \mid C) = 0$, where $A_M$ is the set of roots of<br />
$M^{-1} \sum_{m=1}^{M} \partial L(a, \omega^{(m)})/\partial a = 0$.<br />

6. MCMC Implementation<br />
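Before turning to the software, the Bayes-action characterizations above (posterior mean under quadratic loss, posterior quantile under linear-linear loss) can be checked numerically in the spirit of the MCMC result: minimize the sampled average loss over a grid of candidate actions. The normal "posterior" draws below are purely illustrative:<br />

```python
import numpy as np

rng = np.random.default_rng(1)
omega = rng.normal(2.0, 1.0, size=50_000)  # stand-in for posterior draws of omega

grid = np.linspace(-1.0, 5.0, 601)  # candidate actions a

# quadratic loss (a - omega)^2: the minimizer approximates the posterior mean
quad_risk = [np.mean((a - omega)**2) for a in grid]
a_quad = grid[int(np.argmin(quad_risk))]

# linear-linear (check) loss with q = 0.9: the minimizer approximates the 0.9-quantile
q = 0.9
linlin_risk = [np.mean(np.where(omega < a, (1 - q)*(a - omega), q*(omega - a)))
               for a in grid]
a_linlin = grid[int(np.argmin(linlin_risk))]
```

Up to the grid resolution and Monte Carlo error, `a_quad` matches the sample mean and `a_linlin` matches the empirical 0.9-quantile of the draws.<br />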


We perform inference for our stochastic dynamic copula model by using the OpenBUGS package (Thomas et al.<br />

2006). This is a freely available software package. The acronym BUGS stands for the initials of the phrase “Bayesian<br />

inference Using Gibbs Sampling”. The user of the package can specify a statistical model of (almost) arbitrary<br />

complexity, by stating the relationships between related variables. The software includes an expert system for choosing<br />

an effective MCMC scheme for each full conditional posterior distribution. The user then controls the execution<br />

of the scheme and can modify it.<br />

As concerns priors, we assume that the model parameters $(\mu_1, \mu_2, \beta_1, \beta_2, \nu_1, \nu_2)$ are mutually independent. The<br />
prior distributions are specified as follows:<br />

$\mu_i \sim N(0, 100)$,<br />

$\beta_i = 2\beta_i^* - 1$, $\beta_i^* \sim \mathrm{beta}(20, 2.5)$,<br />

$\nu_i^2 \sim \mathrm{Inverse\text{-}gamma}(2.5, 0.025)$, $i = 1, 2$.<br />

In determining the number of iterations to achieve convergence to the stationary distribution, we apply diagnostics<br />

implemented in the CODA (Best, Cowles and Vines 1996) and BOA (Smith 2005) R packages. They include the<br />

Geweke, Gelman-Rubin, Raftery-Lewis, and Heidelberger-Welch diagnostics.<br />
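The implied shapes of the priors listed above can be inspected by direct sampling. The sketch below treats N(0, 100) as a mean-variance parameterization (an assumption on our part; BUGS itself parameterizes the normal by precision) and uses the reciprocal-Gamma representation of the inverse-gamma:<br />

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

mu_i = rng.normal(0.0, 10.0, n)            # N(0, 100): std 10 if 100 is a variance
beta_star = rng.beta(20.0, 2.5, n)
beta_i = 2.0*beta_star - 1.0               # persistence prior concentrated near 1
nu2_i = 1.0/rng.gamma(2.5, 1.0/0.025, n)   # Inverse-gamma(2.5, 0.025) via 1/Gamma

# prior mean of beta_i is 2*20/22.5 - 1, about 0.78: the prior favours persistent
# latent dynamics, while the nu_i^2 prior keeps the innovation variance small
```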

7. A Generalized Impulse Response Function Associated with a Bayesian Decision Problem<br />

Let p(ω | Y, θ) be the posterior density of some vector of interest ω given a vector of unobservables θ and a vector of observables Y. For a given loss function L, let ât and ât−1 denote, respectively, the Bayes action corresponding to the situation where the set of observables consists of the data Ut−1 and a shock νt, and of the data Ut−1 only. Suppose that the Bayes actions ât and ât−1 are strict global minima of the corresponding risk functions (25). Under the above assumptions, we define the generalized impulse response function as<br />
GIR a,L,ω (νt, Yt−1) = ât − ât−1. (26)<br />
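Given posterior predictive simulation output, definition (26) is straightforward to evaluate: under absolute-error loss the Bayes action is the predictive median, so the GIR is a difference of medians between the shock-conditioned and baseline simulations. A minimal sketch with toy draws (illustrative numbers only, not the paper's output):

```python
import statistics

def gir(draws_with_shock, draws_baseline, action=statistics.median):
    """Generalized impulse response (26): difference of the Bayes actions
    computed from predictive draws with and without the shock.
    'action' encodes the loss function: median for absolute-error loss,
    mean for quadratic loss."""
    return action(draws_with_shock) - action(draws_baseline)

# Toy predictive samples of, e.g., Kendall's tau at some horizon.
with_shock = [0.42, 0.45, 0.44, 0.47, 0.43]
baseline = [0.44, 0.46, 0.45, 0.48, 0.47]
response = gir(with_shock, baseline)
```

In practice `with_shock` and `baseline` would be the MCMC predictive draws of the dependence measure at a given horizon from the two conditioning sets.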

In our investigation concerning the copula-based dependence measures, as the vector of interest, ωT+n, we consider the future values of Kendall’s tau and the lower and upper tail dependence coefficients.<br />
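These measures have closed forms for the BB1 copula used in the empirical section; assuming the standard BB1 parameterization of Joe (1997) with θ > 0 and δ ≥ 1 (formula (11) itself is not reproduced in this excerpt), each predictive draw of the copula parameters maps directly to a draw of (τ, λL, λU):

```python
def bb1_dependence(theta, delta):
    """Closed-form dependence measures for the BB1 copula (Joe 1997):
    Kendall's tau, lower and upper tail dependence coefficients."""
    assert theta > 0.0 and delta >= 1.0
    tau = 1.0 - 2.0 / (delta * (theta + 2.0))
    lam_lower = 2.0 ** (-1.0 / (theta * delta))
    lam_upper = 2.0 - 2.0 ** (1.0 / delta)
    return tau, lam_lower, lam_upper

tau, lam_lower, lam_upper = bb1_dependence(theta=1.0, delta=2.0)
```

Applying this map to every posterior predictive draw of (θ, δ) yields the predictive distribution of the three dependence measures at each horizon.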

8. The Data<br />

We investigated dependencies between the daily returns on several selected stock indices. In this paper, however, we<br />

report only the results obtained for S&P500 and DAX, and two chosen shocks. We model the time series of percentage<br />

logarithmic daily returns given by the formula<br />

rt = 100 (ln Pt − ln Pt−1)<br />
where Pt is the value of the stock index on day t.<br />
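The transformation can be checked in a couple of lines (illustrative index values, not the actual data):

```python
import math

def pct_log_returns(prices):
    """r_t = 100 * (ln P_t - ln P_{t-1}) for a series of index values."""
    return [100.0 * (math.log(p1) - math.log(p0))
            for p0, p1 in zip(prices, prices[1:])]

r = pct_log_returns([1500.0, 1515.0, 1485.0])  # a 1% rise, then a ~2% fall
```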

The first shock considered is a negative one, of September 29, 2008. It corresponds to the following event: “The US House of Representatives rejected the Bush administration’s $700 billion emergency rescue plan. […] The Dow<br />



Jones industrial average lost 777.68 points, its biggest single-day fall ever, easily beating the 684 points it lost on the<br />

first day of trading after the Sept. 11, 2001, terrorist attacks. Crude oil futures closed down $10.52 in their biggest<br />

decline since Jan 17, 1991, when the US opened strategic oil reserves during the first Gulf war”.<br />

The other shock considered, a positive one, from October 13, 2008, is connected with the event: “Stock markets rejoiced after governments worldwide launched multibillion-dollar bailouts to shore up banks, and Britain called for a new Bretton Woods agreement to reshape the world financial system. The US Central Bank said it would provide unlimited dollars to the European Central Bank, the Bank of England and the Swiss National Bank. Britain committed<br />

£37 billion ($64 billion) to capitalize its big banks. Wall Street rebounded with the biggest stock rally since the<br />

Great Depression. The DJIA rose 936 points to close at 9,387.61, its largest point gain ever and one of its largest<br />

percentage increases” * .<br />

The data used for estimation included in each case 600 observations preceding the shock. Thus, in the case of the first of the described shocks, the data cover the period from April 28, 2006 to September 29, 2008, and in the second case, the period under scrutiny is from May 5, 2006 to October 13, 2008.<br />

9. Empirical Results<br />

In this section we present the results of the posterior and predictive inference applied to the considered data and shocks. We fitted to the data the stochastic dynamic copula model described in section 4, based on the BB1 copula defined by (11). Tables 1-4 contain the posterior summaries for the model parameters. In each case, the inference was performed both for the set of observations up to the day preceding the shock and for the dataset including the shock. The estimation results differ quite clearly once the shock is included.<br />

Table 1. September 29, 2008. Posterior summaries for the model parameters before the shock<br />

Parameter | Mean | Std. Deviation | 2.5% | Median | 97.5%<br />
μ1 | -1.6600 | 0.3554 | -2.3790 | -1.5870 | -1.1230<br />
μ2 | -0.5094 | 0.1261 | -0.7746 | -0.5099 | -0.2414<br />
φ1 | 0.7713 | 0.1246 | 0.4757 | 0.7926 | 0.9526<br />
φ2 | 0.7810 | 0.1157 | 0.4924 | 0.8041 | 0.9393<br />
σ1 | 0.1182 | 0.0475 | 0.0616 | 0.1062 | 0.2456<br />
σ2 | 0.1179 | 0.0410 | 0.0632 | 0.1093 | 0.2228<br />
Results obtained after 500 000 iterations (discarding 500 000 as burn-in)<br />

Table 2. September 29, 2008. Posterior summaries for the model parameters including the shock<br />

Parameter | Mean | Std. Deviation | 2.5% | Median | 97.5%<br />
μ1 | -1.3380 | 0.3349 | -2.0540 | -1.3090 | -0.7089<br />
μ2 | -0.6012 | 0.1600 | -0.8975 | -0.6077 | -0.2639<br />
φ1 | 0.8125 | 0.1267 | 0.5108 | 0.8374 | 0.9786<br />
φ2 | 0.8058 | 0.1184 | 0.5136 | 0.8323 | 0.9567<br />
σ1 | 0.1341 | 0.0656 | 0.0663 | 0.1190 | 0.2947<br />
σ2 | 0.1286 | 0.0591 | 0.0647 | 0.1131 | 0.2883<br />
Results obtained after 500 000 iterations (discarding 500 000 as burn-in)<br />

Table 3. October 13, 2008. Posterior summaries for the model parameters before the shock<br />

* Both shock descriptions are taken from http://timelines.ws/<br />



Parameter | Mean | Std. Deviation | 2.5% | Median | 97.5%<br />
μ1 | -1.7080 | 0.4739 | -2.4530 | -1.8120 | -0.6341<br />
μ2 | -0.5308 | 0.1677 | -0.9146 | -0.5121 | -0.2453<br />
φ1 | 0.7739 | 0.1220 | 0.4849 | 0.7958 | 0.9437<br />
φ2 | 0.8023 | 0.1248 | 0.4975 | 0.8278 | 0.9653<br />
σ1 | 0.1184 | 0.0488 | 0.0614 | 0.1064 | 0.2513<br />
σ2 | 0.1320 | 0.0663 | 0.0638 | 0.1137 | 0.3114<br />
Results obtained after 500 000 iterations (discarding 500 000 as burn-in)<br />

Table 4. October 13, 2008. Posterior summaries for the model parameters including the shock<br />

Parameter | Mean | Std. Deviation | 2.5% | Median | 97.5%<br />
μ1 | -1.4400 | 0.2417 | -1.8650 | -1.4550 | -0.9399<br />
μ2 | -0.5947 | 0.1332 | -0.8882 | -0.5844 | -0.3568<br />
φ1 | 0.7739 | 0.1337 | 0.4533 | 0.7961 | 0.9570<br />
φ2 | 0.7938 | 0.1176 | 0.5123 | 0.8166 | 0.9504<br />
σ1 | 0.1203 | 0.0568 | 0.0616 | 0.1056 | 0.2770<br />
σ2 | 0.1182 | 0.0437 | 0.0617 | 0.1085 | 0.2312<br />
Results obtained after 500 000 iterations (discarding 500 000 as burn-in)<br />

Next, based on the obtained posterior predictive simulation data, the generalized impulse response functions were calculated for Kendall’s tau and the lower and upper tail coefficients, taking the corresponding predictive mean, median, and 2.5 and 97.5 percentiles as the Bayes actions. In figures 1-6, we present plots for the medians. To understand the reaction of the linkages between the S&P500 and DAX, or, in other words, between the American and European stock markets, to the shock, we analyze the impulse responses for the three measures of dependence.<br />

Figure 1. September 29, 2008. Impulse response function for Kendall’s tau median<br />



Figure 2. September 29, 2008. Impulse response function for the lower tail dependence coefficient median<br />

Figure 3. September 29, 2008. Impulse response function for the upper tail dependence coefficient median<br />

The plots presented in figures 1-3 show the impulse response in the case of a negative shock hitting the US economy. The shapes of all three plots are very similar. However, in the case of Kendall’s tau and the upper tail dependence coefficient, we observe a negative impact on the linkages, strengthening during the first 30 days. For the lower tail dependence we have a positive impact on connections, weakening in the first period. The complicated dynamics of the impulse responses is a result of the high non-linearity in the dependence models. These results are in agreement with our expectations. The shock was connected with the American economy, so it did not touch the German stock market to a comparably high degree. The weakening of the linkages is thus quite understandable. The increase in lower tail dependence is, however, an effect of the fear connected with the uncertainty in financial markets.<br />



Figure 4. October 13, 2008. Impulse response function for Kendall’s tau median<br />
Figure 5. October 13, 2008. Impulse response function for the lower tail dependence coefficient median<br />

Figure 6. October 13, 2008. Impulse response function for the upper tail dependence coefficient median<br />

The pattern of reactions in the case of the analyzed positive shock is very similar. We can observe a negative impact on the linkages measured by Kendall’s tau and the upper tail dependence coefficient, and a positive one in the case of the lower tail coefficient. The difference is that during the first 10 days the positive impact on lower tail dependence strengthens. It thus seems that positive news results in a stronger connection between the indices during market panic days. It should be admitted, however, that the period under scrutiny was a rather turbulent one, and the presented example can be misleading.<br />



10. Conclusions<br />

Understanding the nature and the dynamics of dependencies between international markets is of great importance in finance, as well as in macroeconomic policy. To investigate the process of shock transmission, we proposed a concept of impulse response function describing the time profile of the effect of shocks on linkages modeled by means of a dynamic copula and measured using Kendall’s tau and the tail dependence coefficients. This approach was applied to analyze the persistence of the impact of selected important historical shocks on the strength of linkages between the returns on the S&P500 and DAX indices in 2008. Our results show that the introduced notion can be a useful tool, allowing a deeper analysis of the reaction of the dependencies to various kinds of shocks.<br />

11. References<br />

Almeida C., Czado C. (2010), Efficient Bayesian inference for stochastic time-varying copula models, Preprint,<br />

Chair of Mathematical Statistics, Technische Universität München.<br />

Amemiya T. (1985), Advanced Econometrics, Harvard University Press, Cambridge, MA.<br />

Berger J.O. (1985), Statistical Decision Theory and Bayesian Analysis, Springer, New York.<br />

Best N., Cowles M., Vines K. (1996), CODA: Convergence Diagnostics and Output Analysis Software for Gibbs<br />

Sampling Output, Version 0.30, MRC Biostatistics Unit, Institute of Public Health, Cambridge, UK.<br />

Geweke J. (2005), Contemporary Bayesian Econometrics and Statistics, Wiley, New York.<br />

Hafner C.M., Herwartz H. (2006), Volatility impulse responses for multivariate GARCH models: An exchange rate<br />

illustration, Journal of International Money and Finance 25, 719-740.<br />

Hafner C.M., Manner H. (2008), Dynamic stochastic copula models: Estimation, inference and applications, METEOR Research Memorandum RM/08/043, Maastricht University.<br />

Joe H. (1997), Multivariate Models and Dependence Concepts, Chapman & Hall/CRC, New York.<br />

Koop G., Pesaran M.H., Potter S.M. (1996), Impulse response analysis in nonlinear multivariate models, Journal of<br />

Econometrics 74, 119-147.<br />

McNeil A., Frey R., Embrechts P. (2005), Quantitative Risk Management, Princeton University Press, Princeton.<br />

Nelsen R.B. (2006), An Introduction to Copulas, Springer Verlag, 2nd Edition, New York.<br />

Patton A.J. (2004), On the Out-of-Sample Importance of Skewness and Asymmetric Dependence for Asset Allocation,<br />

Journal of Financial Econometrics 2, 130-168.<br />

Sklar A. (1959), Fonctions de répartition à n dimensions et leurs marges, Publications de l’Institut Statistique de<br />

l’Université de Paris 8, 229-231.<br />

Smith B. (2005), Bayesian Output Analysis Program (BOA) Version 1.1.5, User’s Manual, Technical Report, Department<br />

of Public Health, The University of Iowa.<br />

Thomas A., O’Hara B., Ligges U., Sturtz S. (2006), Making BUGS Open, R News 6, 12-17.<br />

Tsay R. (2005), Analysis of Financial Time Series, 2nd Edition, Wiley-Interscience, Hoboken.<br />



STOCK SPLITS AND HERDING<br />

Maria Chiara Iannino, Queen Mary University of London, UK<br />

Email: m.c.iannino@qmul.ac.uk<br />

Abstract. This paper addresses institutional herding in the case of the announcement of a stock split. The analysis consists of investigating whether companies that announce stock splits exhibit a systematic abnormal level of herding, and whether the intensity of this phenomenon helps to explain the market reaction to the event. Using data on the trading activity of US institutional investors from 1994 to 2005, institutional herding is captured by the correlation between the institutional demand over two consecutive periods. The results are consistent with a positive and highly significant level of imitative behaviour, particularly between 1998 and 2001. Big investors tend to herd more on splitting companies in boom markets. Further analyses factor out the effect of fundamentals and common public information, showing that passive strategies and non-intentional herding affect the institutional demand for nonsplitting companies more. We decompose herding into the contributions of several types, and the results are congruent with the presence of informational cascades at the split announcement. In particular, the difference in herding between the two groups of companies is explained predominantly by proxies for the quality of the information, such as the dispersion of beliefs among analysts. Herding on nonsplitting companies is instead motivated more by characteristic preference. Finally, the imitative behaviour we observe for splitting stocks has a stabilizing effect on future returns. This result supports the hypothesis that an informational content is included in the announcement of the event, and that the market underreacts to it.<br />

Keywords: Herding, Institutional investors, Stock Splits, Informational Cascades<br />

JEL classification: G11 G14 G20.<br />

1 Introduction<br />

This paper addresses institutional herding in the specific occurrence of a stock split. This event is still a puzzling phenomenon because of the abnormal market reaction following its announcement and occurrence (among many, Lakonishok et al., 1986, Ikenberry et al., 2002). The presence of imitative behaviour could, on one hand, exacerbate suboptimal decisions in the functioning of the markets and in the reaction to the announcements. On the other hand, a stabilizing herd behaviour would help prices to aggregate more quickly any informational content that is driven by the event. In light of previous literature, which evinces an informational content in the announcement of stock splits, we investigate whether companies that announce stock splits exhibit a systematic abnormal level of herding with respect to the rest of the market.<br />

Using data on the quarterly stock holdings of US institutional investors, 1994 to 2005, we measure institutional herding as the correlation between trades among financial institutions over two consecutive periods of time (as Sias, 2004). The analysis proceeds in three steps. First, we measure the level of correlation among investors' decisions both in the overall market and in a subsample of companies that have announced at least one stock split in the quarter. Then, we propose an analysis of the motivations of this behaviour according to the theoretical literature, and in particular of the motivations behind the difference in herding between splitting and nonsplitting companies. Finally, we investigate the stabilizing effect of herding on splitting stocks. 1<br />

Our results are consistent with the presence of a significant level of imitative behaviour. It is particularly intense<br />

in moments of crisis for nonsplitting companies, while big investors tend to herd more on splitting companies in<br />

boom markets. In particular, the difference in herding between the two groups of companies is explained<br />

predominantly by proxies for the quality of the information, consistent with a signal hypothesis in the event announcement. Moreover, such imitative behaviour on splitting stocks has a stabilizing effect on future returns, as<br />

the market underreacts to a positive informational content in the event.<br />

1 A gap still persists between the empirical literature, verifying both the presence and the causes of herding among investors in real-functioning markets, and the theoretical developments on the motivations to herd. Because of the lack of data on the private signals and communications among agents, empirical tests are based on the excess level of correlation with respect to a benchmark of independence (Lakonishok et al., 1992, Sias, 2004). This is a clear and easy-to-test definition of institutional herding; however, it does not allow one to investigate the coordination mechanism between agents and, therefore, to distinguish among different reasons to herd.<br />



The remainder of this paper proceeds as follows. Section 2 describes the methodology we use to detect, measure and motivate herding in our samples. Section 3 describes the data and discusses the main empirical results. Section 4 concludes, offering some final remarks.<br />

2 Sample and methodology<br />

We carry out the empirical investigation on quarterly US stock holdings by financial institutions, extracted from the Thomson Financial database, over a twelve-year period. We consider all types of professional investment companies and advisors who are required to file Form 13F according to the SEC regulations. This information is complemented by market data about the companies, such as stock split data, prices and capitalization, from the CRSP daily database, and by data about the analysts' forecasts and coverage from the I/B/E/S monthly database, aggregated per quarter. The overall sample is composed of 1,760 companies, traded by 3,690 investors. We select two subsamples of splitting and nonsplitting stocks. A splitting stock is a company that has announced at least one split in the quarter of analysis, according to CRSP. We have 1,602 announced events by 890 companies.<br />

The starting point of our analysis is the estimation of the intertemporal correlation of the institutional demand (as Sias, 2004). In the presence of herding, a quarter’s trading actions are affected by the previous quarter's trades. The potential level of herding in quarter t is measured as the correlation across companies between the standardized fraction of buyers of stock i in quarter t, Δi,t, and the analogous proportion in the previous period t-1:<br />
Δi,t = βt Δi,t-1 + εi,t<br />
where Δi,t = (Pi,t - P̄t)/σt; Pi,t is the fraction of institutional buyers of stock i at the end of quarter t; P̄t and σt are the mean and standard deviation of the proportions Pi,t across companies. A positive coefficient βt is consistent with investors following the past aggregate behaviour of all the institutional investors in the market. We first estimate the betas on the overall sample, and then in each of the two subsamples of splitting and nonsplitting stocks. 2<br />
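As a concrete illustration of the measure (with hypothetical fractions of buyers, not the 13F data), the standardization and the cross-sectional slope can be sketched as:

```python
import statistics

def standardize(frac_buyers):
    """Delta_{i,t}: standardize the fraction of buyers across companies."""
    mean = statistics.fmean(frac_buyers)
    sd = statistics.stdev(frac_buyers)
    return [(p - mean) / sd for p in frac_buyers]

def sias_beta(frac_now, frac_prev):
    """Slope of the cross-sectional regression of Delta_{i,t} on
    Delta_{i,t-1} (no intercept needed: both sides are standardized)."""
    d_now = standardize(frac_now)
    d_prev = standardize(frac_prev)
    num = sum(a * b for a, b in zip(d_now, d_prev))
    den = sum(b * b for b in d_prev)
    return num / den

# Hypothetical fractions of institutional buyers for five stocks in t-1 and t.
p_prev = [0.30, 0.45, 0.50, 0.60, 0.75]
p_now = [0.35, 0.42, 0.55, 0.58, 0.72]
beta = sias_beta(p_now, p_prev)  # positive: demand follows last quarter's demand
```

With both sides standardized, the slope equals the cross-sectional correlation, so a positive beta indicates institutions following the previous quarter's aggregate trades.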

We perform further analyses on the betas in order to account for the influence of factors other than intentional herding. In fact, if investors are exposed to similar market conditions, passive trading strategies and correlated information, they could exhibit clustered, but nonvoluntary, behaviour. We factor out the effect of fundamentals and common public information, regressing the estimated coefficients on the four factors of Carhart (1997):<br />
βt = α + γHML HMLt + γSMB SMBt + γM RMt + γMOM MOMt + εt<br />
where HMLt, SMBt, RMt and MOMt are the returns on value-weighted zero-investment factors that mimic portfolios for, respectively, book-to-market, company size, market returns and momentum, in quarter t. The coefficients of the factors indicate the loadings of the total beta βt ("Sias' beta") that are attributable to fundamental-driven clustering. Then, βt = (α + εt) corresponds to a clean measure of intentional herding for quarter t, "conditional on the market conditions", or "beta adjusted". As before, we distinguish between splitting stocks and nonsplitting stocks and regress the previous equation separately in the two samples.<br />
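The adjustment is an ordinary least-squares regression of the quarterly beta series on the factor returns. A self-contained sketch (hypothetical series, and only two of the four factors, to keep the toy data small):

```python
def ols(y, x_cols):
    """OLS with intercept via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    n, k = len(y), len(x_cols) + 1
    x = [[1.0] + [col[i] for col in x_cols] for i in range(n)]
    # Augmented matrix [X'X | X'y].
    m = [[sum(x[i][r] * x[i][c] for i in range(n)) for c in range(k)]
         + [sum(x[i][r] * y[i] for i in range(n))] for r in range(k)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, k):
            f = m[r][c] / m[c][c]
            for j in range(c, k + 1):
                m[r][j] -= f * m[c][j]
    coef = [0.0] * k
    for c in reversed(range(k)):
        coef[c] = (m[c][k] - sum(m[c][j] * coef[j]
                                 for j in range(c + 1, k))) / m[c][c]
    return coef  # [alpha, gamma_1, gamma_2, ...]

# Hypothetical quarterly series: Sias betas driven exactly by two factor returns.
hml = [0.01, -0.02, 0.03, 0.00, 0.02, -0.01, 0.04, -0.03]
smb = [0.02, 0.01, -0.01, 0.03, -0.02, 0.00, 0.01, 0.02]
betas = [0.45 + 0.8 * h - 0.5 * s for h, s in zip(hml, smb)]
alpha, g_hml, g_smb = ols(betas, [hml, smb])
# "Beta adjusted" for a quarter is alpha plus that quarter's residual.
```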

The second part of the analysis aims to investigate the reasons behind the presence of such an observed level of herding and the difference between splitting and nonsplitting stocks. We impose and test specific assumptions for four theoretical motivations for herding. In particular, we test whether herding on splitting companies is informational- or characteristics-based. In an uncertain informational environment, agents facing decisions rationally ignore their noisy and imperfect private information, causing informational cascades to arise (Bikhchandani, 1992, Avery, 1998). Moreover, under the same informational conditions, rational agents might mimic the investment decisions of other agents in order to maximize their reputation (Scharfstein, 1990, Dasgupta, 2011). Therefore, we identify market or company condition proxies for imperfect information (Wermers, 1999, Chan, 2005), such as small market capitalization, high dispersion of analysts' forecasts and low analysts' coverage. Operationally, we regress the<br />

2 In all the analyses, we investigate the difference in herding between splitting and nonsplitting companies either by estimating all the models separately in each sample, or by employing model specifications with binary variables δi,tS, which assume value 1 if the company has announced at least one stock split in the quarter of interest. We interact the dummy with the lag institutional demand and, in the following models, include as many interacted dummies as the number of regressors.<br />



institutional demand on its lag, decomposing the total beta between the effect of the information quality proxies Xi,t-1 and other unspecified factors:<br />

Δi,t = βNIH,t Δi,t-1 + Σc=1..C φc,t Xc,i,t-1 Δi,t-1 + εi,t<br />

The coefficients φc,t capture the effect of informational-based herding, in the form of informational cascades or reputational herding; βNIH,t represents the remaining part of the total beta that cannot be attributed to informational contents, while βIH,t = (βt - βNIH,t) represents the "Informational Beta". 3<br />

Apart from informational motives, a correlation of trades can arise because investors share the same trading strategies. Gompers et al. (2001) consider the impact of three main variables on the institutions’ demand for stocks: prudence or regulations, liquidity of the stocks and the historical returns pattern. We distinguish characteristics-based herding by controlling for variables which mirror such stock characteristics: annual cash dividends per quarter and volatility of the stock, as proxies for prudence; market capitalization, price per share and share turnover, for liquidity; and returns over the previous year, as the historical pattern of returns. Operationally, we regress the institutional demand on its lag, decomposing the relation between the effect of the characteristics of company i at quarter t-1 and other unspecified factors:<br />

Δi,t = βNCH,t Δi,t-1 + Σq=1..Q ψq,t Zq,i,t-1 Δi,t-1 + εi,t<br />

where ψq,t are the coefficients of the Q company characteristics; βNCH,t is the remaining part of the Sias' beta that is not attributable to characteristics preference among investors, while βCH,t = (βt - βNCH,t) is the "Characteristics Beta".<br />

Institutional investors could also herd because there are momentum traders, leading to a positive relation between the demand for stocks in quarter t and the past returns of the stocks. 4 Loosely following Sias (2007), we decompose the total correlation between the past returns effect and other factors, adding the lag returns interacted with the lag demand, as:<br />
Δi,t = βNMT,t Δi,t-1 + ρt Ri,t-1 Δi,t-1 + εi,t<br />
βNMT,t is the remaining part of the correlation not explainable by momentum trading, while βMT,t = (βt - βNMT,t) is the "Momentum beta".<br />
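The decomposition amounts to adding the interaction regressor and attributing the drop in the lag coefficient to momentum trading. A sketch with hypothetical data satisfying an exact relation (illustrative only):

```python
def momentum_decomposition(d_now, d_prev, returns_prev):
    """Regress Delta_{i,t} on Delta_{i,t-1} and the interaction
    R_{i,t-1} * Delta_{i,t-1} (no intercept), returning (beta_NMT, rho):
    the part of the lag coefficient not explained by momentum trading,
    and the interaction coefficient."""
    inter = [r * d for r, d in zip(returns_prev, d_prev)]
    # Solve the 2x2 normal equations in closed form.
    a11 = sum(d * d for d in d_prev)
    a12 = sum(d * z for d, z in zip(d_prev, inter))
    a22 = sum(z * z for z in inter)
    b1 = sum(d * y for d, y in zip(d_prev, d_now))
    b2 = sum(z * y for z, y in zip(inter, d_now))
    det = a11 * a22 - a12 * a12
    beta_nmt = (b1 * a22 - b2 * a12) / det
    rho = (a11 * b2 - a12 * b1) / det
    return beta_nmt, rho

# Hypothetical standardized demands and past returns for six stocks.
d_prev = [1.2, -0.8, 0.5, -1.5, 0.3, 0.9]
r_prev = [0.05, -0.02, 0.01, -0.08, 0.03, 0.04]
d_now = [0.5 * d + 2.0 * r * d for d, r in zip(d_prev, r_prev)]  # exact relation
beta_nmt, rho = momentum_decomposition(d_now, d_prev, r_prev)
```

Here the residual lag coefficient and the interaction coefficient are recovered exactly; with real data, βMT,t = βt - βNMT,t then measures the momentum share of the total correlation.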

We replicate all the previous analyses on the motivations to herd for splitting and nonsplitting companies, and we adjust the betas for the Carhart factors.<br />

The third and final step is to test for the effect of herding on the future returns of companies. Past literature shows a positive relation between institutional demand and same-quarter or previous-quarter returns, and a weak positive correlation with future returns (see, among many, Nofsinger, 1999; Grinblatt, 1995; Sias, 2004). A negative relation between demand and subsequent returns would be consistent with a destabilizing effect on prices due to herding, particularly of intentional imitative behaviour, either irrational or positive-feedback-driven. Alternatively, either an intentional correlation due to informational motivations or a fundamental-driven correlation would bring prices closely and quickly towards the true value, as a stabilizing effect (Sias, 2004). Therefore, we regress the institutional demand on the past quarter's, the same period's, and the two consecutive quarters' returns after the measurement period.<br />

3 Looking more carefully at the distinction between reputation concerns and informational cascades, we test for herding looking separately at different institutional types and sizes of the managed portfolio, drawing conclusions from the correlation of their trades within the same group or across groups. The results of this analysis are not reported in this version of the paper.<br />
4 Thus, we take into consideration the possibility of a confounding effect in the beta coefficient, which comes from the fact that the past demand proxies for last quarter's returns if there is momentum among investors.<br />



3 The main results<br />

3.1 Sias’ Beta<br />

Looking at the presence of herding, Table 1 reports the results from the Sias model, estimated on the overall market and on the splitting and nonsplitting samples. As we can see, the estimated coefficients in the overall market are positive and statistically significant in all quarters of analysis. This result is consistent with the hypothesis of herding in the trading decisions of institutions. On average, the beta is 0.457 across all quarters, ranging from 0.346 in 2005 to 0.562 in 2000. The phenomenon is particularly intense in moments of crisis, as in the years 1998 to 2001.<br />

Restricting the analysis to splitting stocks, we observe a negligible difference in the beta coefficients with respect to the nonsplitting sample. On average, the estimated coefficient for herding in the splitting companies is very close (0.467) to that of the alternative group, but more volatile across quarters. However, even if the difference in means is not statistically significant, the median is still clearly higher (0.471 against 0.442) for the group of interest.<br />

Sias' models<br />
1) Overall market. Dependent variable: Δi,t<br />
Variables | Mean estimated | se | t | Median | Min | Max | Q1 | Q3 | Signif. pos. qrts. | Signif. neg. qrts. | Beta Adjusted mean | se<br />
Δi,t-1 | 0.457*** | 0.008 | 56.66 | 0.447 | 0.346 | 0.562 | 0.417 | 0.498 | 48 | 0 | 0.448*** | 0.0079<br />
2) Splitting companies. Dependent variable: ΔS i,t<br />
ΔS i,t-1 | 0.454*** | 0.031 | 14.83 | 0.471 | -0.074 | 0.915 | 0.336 | 0.583 | 36 | 0 | 0.461*** | 0.0290<br />
3) Non-splitting companies. Dependent variable: ΔNS i,t<br />
ΔNS i,t-1 | 0.453*** | 0.008 | 55.36 | 0.442 | 0.344 | 0.559 | 0.414 | 0.497 | 48 | 0 | 0.444*** | 0.0080<br />
Test: Beta S = Beta NS: t = 0.0426 (0.9662); Beta adjusted: t = 0.5941 (0.5549)<br />

Table 1. The table reports the summary statistics of the coefficients of the lag institutional demand (Sias' Betas) estimated in each quarter of analysis, from 1994 to 2005, for the models based on Sias (2004). Institutional demand Δi,t, the standardized fraction of buyers of stock i at quarter t, is first regressed on its lag Δi,t-1 in the overall sample (Model 1). Model 2 considers the institutional demand computed on the sample of splitting companies regressed on the lag demand for all stocks. Analogously, Model 3 is applied to the sample of nonsplitting companies. The reported t-values are computed from the standard error of the estimate series. The numbers of significant quarters are identified at a 10% significance level. The last column reports the Beta adjusted, i.e. the lag coefficients once we control for the four factors à la Carhart (1997). We finally report the statistic and the p-value of the tests on the difference between the splitting and nonsplitting samples for the betas and the betas adjusted.<br />

Looking at three subperiods of four years each, we see that investors tend to herd slightly more when they trade on splitting companies in the subperiod from 1994 to 2001, while we observe higher herding on nonsplitting companies, even if still not significant, from 2002 onwards (Fig. 1.1). In periods of crisis, herding appears to increase for nonsplitting companies, consistently with the literature (Lakonishok et al., 1996). Splitting stocks are instead affected by market crises through a decrease in their frequency, but not in the intensity of the herding phenomenon.<br />
This variation over time, and the negligible average difference between the two groups, motivates additional analysis. Taking into account the effect of different trading activity among companies, we see that convergence of behaviour increases with the trading activity on the company, and herding is more likely to occur among nonsplitting stocks once we take out the effect of thin markets (Fig. 1.2). We also consider the difference in herding between the two groups, differentiating for characteristics of the investors, such as type and portfolio size (Fig. 1.3 and 1.4).<br />

585


This time pattern could also be caused by market factors. Cleaning the coefficients of common factors, we are then able to approximately discriminate between intentional and unintentional herding. The last column of Table 1 reports the average standardized coefficients for the three samples. The factors are determinants of the herding phenomenon, but the average adjusted betas are still all considerably significant and continue to represent almost all of the convergence of behaviour. On average, the estimated adjusted beta is 0.448, i.e. 98% of the total correlation measured by the average Sias beta.

These factors are significant determinants of the institutional demand, especially for nonsplitting companies. Consistent with what is stated above, nonsplitting stocks are more sensitive to market conditions. Passive strategies based on the four factors account for a significant part of the trading activity in nonsplitting companies, while splitting stocks might tend to be more actively traded, as they appear to be less affected by unintentional factors: their adjusted beta still accounts for 93% of the total correlation (against 85%), with a significantly positive difference in mean.

Figure 1.1 Average Splitting Dummies estimated per subperiod (1994-1997, 1998-2001, 2002-2005). Figure 1.2 Average Splitting Dummies estimated per Number of Traders (>= 10, 20, 50, 100 traders).
Figure 1.3 Average Splitting Dummies estimated per size of investor portfolio (small, medium, big investors). Figure 1.4 Average Splitting Dummies estimated per type of investors (banks, insurance co., investment co., indep. advisors, not defined).
[Bar charts not reproduced; the vertical axis of each panel reports the average dummy coefficients.]

Figure 1. The graphs report more detailed investigations of the difference in herding between the two groups. We estimate the Sias model including a dummy variable for splits, interacted with the lag institutional demand. The panels report the average dummy coefficients by subperiod, by trading activity of the company, by size of the managed portfolio and by type of investor.
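The dummy-interaction estimation described in the caption can be sketched for a single quarter's cross-section as follows. This is an illustrative layout only: whether the split dummy also enters on its own, in addition to the interaction with lag demand, is our assumption, and all names are hypothetical.

```python
import numpy as np

def sias_with_split_dummy(demand_t, demand_lag, split_dummy):
    """One quarter's cross-sectional regression with a split dummy
    interacted with lag institutional demand:
    demand_t = a + b*demand_lag + c*(split*demand_lag) + d*split + e.
    The interaction coefficient c measures the extra herding
    (convergence) on splitting stocks relative to the rest."""
    X = np.column_stack([
        np.ones_like(demand_lag),   # intercept
        demand_lag,                 # baseline Sias slope
        split_dummy * demand_lag,   # incremental slope for splitters
        split_dummy,                # dummy main effect (our assumption)
    ])
    return np.linalg.lstsq(X, demand_t, rcond=None)[0]
```

Averaging the estimated c over quarters (or within subgroups of quarters, traders or investor types) yields the average dummy coefficients plotted in Figure 1.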

3.2 Motivations to institutional herding<br />

We examine the impact of the four theoretical types of herding on the estimated correlation: informational cascades, reputational herding, characteristic herding and momentum trading. Table 2 reports the average estimated coefficients for the variables in all the different models. Table 3 includes the model specifications with the interacted splitting dummies. In summary, in the overall market, herding is on average mostly affected by characteristics such as size, coverage, turnover, price, dividends and past returns. Different considerations can be drawn for the splitting sample.



Average Estimated Coefficients for all Models

1) Splitting companies. Dependent variable: Δ S i,t

Variables | Sias' model mean (t) | Informational-based mean (t) | Characteristic-based mean (t) | Momentum mean (t) | Unifying model mean (t)
Δi,t-1 | 0.4170*** (14.63) | -0.2413 (-0.79) | 0.2405 (1.63) | 0.4278*** (13.02) | 0.2155 (0.23)
Dispersion*t-1 | | 0.4378** (2.01) | | | -2.1512 (-0.96)
Coverage*t-1 | | 0.1728 (1.30) | | | 0.3278 (0.81)
Size*Coverage*t-1 | | 0.4031 (0.97) | | | -2.8701 (-1.21)
Size*i,t-1 | | -0.1744 (-0.43) | 0.2979*** (4.82) | | 3.4335 (1.23)
Price*i,t-1 | | | -0.1120 (-1.16) | | -2.0436 (-1.13)
Turnover*i,t-1 | | | -0.0514 (-0.60) | | -0.5519 (-1.27)
StDeviation of returns*i,t-1 | | | 0.1340 (1.25) | | 3.3859 (1.04)
Returns*i,t-4 | | | | 0.0172 (0.46) | -0.0276 (-0.42)
Dividends*i,t-1 | | | -0.0218 (-0.35) | | -0.0623 (-0.28)
Returns*i,t-1 | | | -0.0283 (-1.19) | | -0.0867 (-1.12)

2) Nonsplitting companies. Dependent variable: Δ NS i,t

Variables | Sias' model mean (t) | Informational-based mean (t) | Characteristic-based mean (t) | Momentum mean (t) | Unifying model mean (t)
(In the rows below, the coefficient pairs read left to right across the models in which each variable enters.)

Δi,t-1 0.4749 *** 56.68 0.4155 *** 29.15 0.3721 *** 18.70 0.4660 *** 56.34 0.3759 *** 16.04<br />

Dispersion*t-1 -0.0139 -1.40 0.1378 -1.02<br />

Coverage*t-1 0.1071 *** 10.93 -0.0086 *** 11.13<br />

Size*Coverage*t-1 -0.0801 *** -6.25 0.1010 *** -6.18<br />

Size*i,t-1 0.1467 *** 11.65 0.0935 *** 15.23 -0.0732 *** 11.17<br />

Price*i,t-1 0.0708 *** 7.92 0.0526 *** 6.81<br />

Turnover*i,t-1 0.0392 *** 3.99 0.0082 0.94<br />

StDeviation of returns*i,t-1 -0.0211 -1.19 -0.0336 * -1.64<br />

Returns*i,t-4 0.0035 0.60 0.0056 1.09<br />

Dividends*i,t-1 0.0115 ** 2.09 0.0214 *** 3.87<br />

Returns*i,t-1 0.0530 *** 6.74 0.0522 *** 6.44<br />

Table 2. The table reports the average standardized coefficients of all the variables used in the five models. (1) Sias' model regresses the institutional demand Δi,t on its lag Δi,t-1 only. (2) Informational-based models regress the institutional demand on its lag and a set of proxies for the quality of information, such as size, dispersion, coverage and size*coverage at the previous quarter. (3) Characteristic-based models regress the institutional demand on its lag and a set of company characteristics, such as size, price, turnover, standard deviation, returns of stocks and quarterly dividends, measured at the previous quarter. (4) The momentum models regress the institutional demand on its lag and the previous year's returns. (5) Finally, a unifying model regresses the institutional demand on all the previous variables. Significance is assessed by estimating the t statistics from the time series of the beta estimates. *10%, **5%, ***1% significance level.

3.2.1 Informational Cascades<br />

The splitting sample presents interesting results, consistent with a predominance of informational-based herding. The model is well specified, and the F tests on all the informational regressors provide evidence of the significance of the proxies for informational-based herding in most of the quarters. Moreover, the signs of the estimated coefficients of the proxies confirm that herding increases as the quality of the available information decreases. Particularly important is the dispersion of beliefs among analysts: the dispersion coefficient is on average positive and significant, explaining most of the Sias beta. The higher the dispersion of beliefs in the quarter preceding the split, the higher the level of herding.

The difference in the level of herding between splitting and nonsplitting companies that remains after addressing the informational content is significant, consistent with the hypothesis that stock split announcements convey information to the market, creating the environment of uncertainty in which informational cascades arise. Also, looking at the adjusted betas, cleaned of common factors, we reach even stronger conclusions, and the test for a positive difference in mean is significant at 1%.



Average Estimated Coefficients for all Models (dummy specifications)<br />

Dependent variable: Δi,t<br />

Overall market with splitting dummies<br />

Variables | Sias' model mean (t) | Informational-based mean (t) | Characteristic-based mean (t) | Momentum mean (t) | Unifying model mean (t)

Δi,t-1 0.4743 *** 56.79 0.4116 *** 28.68 0.3671 *** 18.52 0.4655 *** 56.47 0.3709 *** 15.66<br />

Dispersion*t-1 -0.0134 -1.38 -0.0092 -1.07<br />

Coverage*t-1 0.1095 *** 11.05 0.1035 *** 11.31<br />

Size*Coverage*t-1 -0.0814 *** -6.05 -0.0768 *** -6.49<br />

Size*i,t-1 0.1549 *** 11.67 0.1027 *** 17.80 0.1476 *** 12.09<br />

Price*i,t-1 0.0726 *** 8.02 0.0550 *** 6.99<br />

Turnover*i,t-1 0.0406 *** 4.26 0.0096 1.14<br />

StDeviation of returns*i,t-1 -0.0174 -0.98 -0.0305 -1.51<br />

Returns*i,t-4 0.0030 0.51 0.0075 1.51<br />

Dividends*i,t-1 0.0109 ** 1.98 0.0196 *** 3.45<br />

Returns*i,t-1 0.0534 *** 6.72 0.0514 *** 6.22<br />

δi,tΔi,t-1 -0.0009 -0.19 -0.0034 -0.65 -0.0050 -1.03 0.0000 0.00 -0.0014 -0.26<br />

δi,t*Dispersion*t-1 0.0262 *** 2.52 -0.0378 -0.73<br />

δi,t*Coverage*t-1 -0.0090 -0.96 -0.0324 -1.26<br />

δi,t*Size*Coverage*t-1 0.0798 1.30 -0.0989 -0.65<br />

δi,t*Size*i,t-1 -0.0630 -1.04 0.0157 *** 2.74 0.1257 0.89<br />

δi,t*Price*i,t-1 0.0031 0.31 0.0406 1.12<br />

δi,t*Turnover*i,t-1 0.0013 0.19 0.0242 1.18<br />

δi,t*StDeviation of returns*i,t-1 0.0139 * 1.93 0.0065 0.40<br />

δi,t*Returns*i,t-4 0.0041 1.30 0.0055 1.26<br />

δi,t*Dividends*i,t-1 -0.0109 * -1.83 -0.0200 *** -2.61<br />

δi,t*Returns*i,t-1 -0.0124 *** -2.72 -0.0151 *** -2.45<br />

Table 3. The table reports the standardized coefficients of all the variables used in the five models. (1) Sias' model regresses the institutional demand Δi,t on its lag Δi,t-1 only. (2) Informational-based models regress the institutional demand on the lag demand and a set of proxies for the quality of information, such as size, dispersion, coverage and size*coverage at the previous quarter. (3) Characteristic-based models regress the institutional demand on its lag and a set of company characteristics, such as size, price, turnover, standard deviation, returns of stocks and quarterly dividends, measured at the previous quarter. (4) Momentum models regress the institutional demand on its lag and the previous year's returns. (5) Finally, a unifying model regresses the institutional demand on all the previous variables. Significance is assessed by estimating the t statistics from the time series of the beta estimates. *10%, **5%, ***1% significance level.

3.2.2 Positive Feedback Herding<br />

Checking for positive feedback herding, we first look at characteristic-based herding. The remaining betas are still positive and significant. Moreover, looking at the regressor coefficients, we see that company characteristics have an important effect on the convergence in the overall market. The F tests on the regressors mainly reject the null of no joint significance for all samples. Moreover, the signs of the coefficients are consistent with the hypothesis that institutions tend to herd on nonsplitting companies, preferring large companies with high price per share, high turnover and high dividends.

For splitting companies, the results are weaker than for the previous informational-based herding. Even if the F tests show a joint significance of the regressors in explaining the convergence of behaviour, only size has a significant and positive coefficient, while all the other coefficients reject the hypothesis of characteristic-based convergence. The difference in means with nonsplitting companies is also not significant.

In the class of positive feedback herding, we also investigate the presence of momentum trading. Momentum strategies have an effect on the convergence towards stocks, as the coefficients of the past returns are mostly significant. However, the effect is smaller with respect to the other types, especially for the splitting sample.



Looking at the difference between groups, the adjusted betas also yield interesting results. The difference in means between the samples is significant at 1%; therefore, once past returns are accounted for and the effect of passive strategies is cleared out, investors herd more on splitting companies for reasons other than momentum.

3.3 The stabilizing effect of herding<br />

Finally, we look at the effect of herding on the future performance of the companies. We regress the institutional demand on the past quarter's, the same period's, and the two consecutive quarters' returns after the measurement period. Consistent with the literature, we observe a positive relationship between institutional demand and past-quarter and same-quarter returns for the overall market and for nonsplitting companies. We do not find any similar relationship for the splitting companies (Table 4).

Instead, interestingly, we observe a positive and highly significant relation between institutional demand and returns in the two following quarters for splitting firms (0.3704 and 0.3692, respectively). This is consistent with a stabilizing effect of herding on prices for splitting companies. According to Sias (2004), such a positive relation is further evidence of the presence of informational-based herding.

Stabilizing Effect Models

1) Splitting companies: Dependent variable: Δ S i,t

Variables | Mean estimated | se | t | min | max | Significant pos. qrts. | Significant neg. qrts.
Δi,t-1 | 0.4057*** | 0.0470 | 8.63 | -0.3454 | 1.4248 | 26 | 0
Rett-1 | 0.0819 | 0.2240 | 0.37 | -5.0132 | 3.2220 | 9 | 3
Rett | 0.0035 | 0.2140 | 0.02 | -5.2177 | 2.8125 | 8 | 1
Rett+1 | 0.3704*** | 0.1760 | 2.11 | -1.8904 | 3.7678 | 8 | 2
Rett+2 | 0.3692*** | 0.1794 | 2.06 | -3.5798 | 3.3968 | 7 | 1
Rett+4 | 0.1201 | 0.1904 | 0.63 | -4.1861 | 3.9397 | 4 | 5

2) Non-splitting companies: Dependent variable: Δ NS i,t

Variables | Mean estimated | se | t | min | max | Significant pos. qrts. | Significant neg. qrts.

Δi,t-1 0.4484 *** 0.0081 55.18 0.3633 0.5552 44 0<br />

Rett-1 0.2212 *** 0.0395 5.61 -0.3450 0.8020 23 1<br />

Rett 0.3420 *** 0.0369 9.27 -0.2119 0.8572 31 0<br />

Rett+1 -0.0015 0.0342 -0.04 -0.5669 0.6303 10 6<br />

Rett+2 0.0100 0.0301 0.33 -0.3770 0.4235 5 4<br />

Rett+4 -0.0337 0.0289 -1.17 -0.4689 0.3463 3 5<br />

Table 4. The table reports the summary statistics of the coefficients of the lag institutional demand estimated in each quarter of analysis, from 1994 to 2005, for four stabilizing models. Model (1) regresses the institutional demand (as the fraction of buyers of stock i at quarter t) on the lag institutional demand for the overall market and on the past quarter, same quarter, following two quarters and following year returns. Model (2) regresses the institutional demand on its lag, the previous set of regressors and a set of splitting dummies interacted with all the regressors. Models (3) and (4) estimate the same specification as Model (1) on the splitting sample and the nonsplitting sample, respectively. The reported t-values are computed from the standard errors of the series of estimates. The numbers of significant quarters are identified at the 10% significance level. *10%, **5%, ***1% significance level.
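The quarterly stabilizing-effect regressions can be sketched as follows. This is an illustrative layout, not the authors' code: the data shape (quarters × stocks for both demand and returns), the exact regressor set, and all names are our assumptions.

```python
import numpy as np
import pandas as pd

def stabilizing_effect(demand: pd.DataFrame, returns: pd.DataFrame):
    """For each quarter t, regress the cross-section of institutional
    demand at t on lag demand and on returns at t-1, t, t+1 and t+2,
    then average each coefficient over quarters."""
    names = ["lag_demand", "ret_t-1", "ret_t", "ret_t+1", "ret_t+2"]
    coefs = []
    for t in range(1, len(demand) - 2):
        X = np.column_stack([
            np.ones(demand.shape[1]),  # intercept
            demand.iloc[t - 1],        # lag institutional demand
            returns.iloc[t - 1],       # past-quarter returns
            returns.iloc[t],           # same-quarter returns
            returns.iloc[t + 1],       # one quarter ahead
            returns.iloc[t + 2],       # two quarters ahead
        ])
        y = demand.iloc[t].to_numpy()
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        coefs.append(b[1:])            # drop the intercept
    return pd.DataFrame(coefs, columns=names).mean()
```

A positive average coefficient on the lead returns (ret_t+1, ret_t+2), as found here for the splitting sample, is what the text interprets as a stabilizing effect of herding.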

4 Summary<br />

With this empirical paper we aim to contribute to the understanding of stock splits and their market reaction, in the light of the impact of herding, i.e. correlated trading decisions among institutional investors.

We have found evidence of positive and significant correlation in each quarter of analysis. Also, controlling for unintentional correlation, we have found that most of the correlation is not attributable to the four factors à la Carhart (1997), namely market return, size, book-to-market and momentum, which proxy for unintentional or spurious herding.



Distinguishing between herding in splitting and in nonsplitting companies in the overall case, we do not find a significant difference on average. However, the difference in correlation decreases over the three four-year subperiods, going from a positive but not significant average in the subperiod 1994-1997 to a negative (but still not significant) average in 2002-2005.

However, we find a difference in the motivation to herd, with splitting companies more affected by informational-based herding. The presence of informational-based herding, especially for splitting companies, is also confirmed by the relation between institutional demand and future returns. Moreover, the positive relation we find between institutional demand for splitting firms and their returns in the following two quarters is consistent with the stabilizing effect of herding. In contrast, we do not report any significant relationship between institutional demand and future returns in the nonsplitting sample or in the overall market.

Our results are therefore consistent with the presence of informational content in the split event and with the underreaction of the market. We should note, however, that this underreaction is affected by herd trading. Still, a significant part of the correlation among investors, and of the difference between the subsamples, is not explained by these four types, suggesting that further studies should be carried out to better understand other, possibly irrational, motivations for the phenomenon. Moreover, further development of this research will focus on investigating both the change in herding and its impact on the future performance of the company in the days around the announcement of stock splits.

5 References<br />

Avery, Christopher and Peter Zemsky (1998), ‘Multidimensional Uncertainty and Herd Behaviour in Financial<br />

Markets’, The American Economic Review 88(4), 724-748.<br />

Bikhchandani, Sushil, David Hirshleifer and Ivo Welch (1992), ‘A Theory of Fads, Fashion, Custom, and Cultural<br />

Change as Informational Cascades’, Journal of Political Economy 100(5), 992-1026.<br />

Carhart, Mark M. (1997), ‘On Persistence in Mutual Fund Performance’, The Journal of Finance 52(1), 57-82.<br />

Chan, Kalok, Chuan-Yang Hwang and Mujtaba G. Mian (2005), ‘Mutual fund herding and Dispersion of Analysts’<br />

Earnings Forecasts’, Working Paper February, 1-41.<br />

Dasgupta, Amil, Andrea Prat and Michela Verardo (2011), ‘The Price Impact Of Institutional Herding’, Review of<br />

Financial Studies 24 (3), 892-925<br />

Gompers, Paul A. and Andrew Metrick (2001), ‘Institutional Investors and Equity Prices’, Quarterly Journal of<br />

Economics 116(1), 229-259.<br />

Grinblatt, Mark, Sheridan Titman and Russ Wermers (1995), ‘Momentum Investment Strategies, Portfolio<br />

Performance, and Herding: A Study of Mutual Fund Herding’, The American Economic Review 85(5), 1088-<br />

1105.<br />

Ikenberry, David L. and Sundaresh Ramnath (2002), ‘Underreaction to Self-Selected News Events: The Case of<br />

Stock Splits’, Review of Financial Studies 15(2), 489-526.<br />

Lakonishok, Josef, Andrei Shleifer and Robert Vishny (1992), ‘The impact of institutional trading on stock prices’, Journal of<br />

Financial Economics 32, 23-43.<br />

Lakonishok, Josef and Theo Vermaelen (1986), ‘Tax-induced trading around ex-dividend days’, Journal of Financial<br />

Economics 16, 287-319.<br />

Nofsinger, John R. and Richard W. Sias (1999), ‘Herding and Feedback Trading by Institutional and Individual<br />

Investors’, Journal of Finance 54(6), 2263-2295.<br />



Scharfstein, David S. and Jeremy C. Stein (1990), ‘Herd Behaviour and Investment’, The American Economic<br />

Review 80(3), 465-479.<br />

Sias, Richard W. (2004), ‘Institutional Herding’, Review of Financial Studies 17(1), 165-206.<br />

Sias, Richard W. (2007), ‘Reconcilable Differences: Momentum Trading by Institutions’, The Financial Review 42,<br />

1-22.<br />

Wermers, Russ (1999), ‘Mutual Fund Herding and the Impact on Stock Prices’, The Journal of Finance 54(2), 581-<br />

622.<br />



TIME-VARYING BETA RISK FOR TRADING STOCKS OF TEHRAN STOCK EXCHANGE IN IRAN:<br />

(A COMPARISON OF ALTERNATIVE MODELING TECHNIQUES)<br />

Majid Mirzaee Ghazani<br />

University of Tehran, Faculty of Economics<br />

E-mail: MajidMirza@ut.ac.ir http://economics.ut.ac.ir/<br />

Abstract. This paper investigates the time-varying behavior of the systematic risk of 30 stocks, grouped into 10 different sectors, trading on the Tehran Stock Exchange (TSE). Daily data are used, and the period runs from July 2003 to December 2009. Three different modeling techniques in addition to the standard constant-coefficient model are employed: a bivariate t-GARCH(1,1) model, two Kalman filter based approaches, and a bivariate stochastic volatility model estimated via the efficient Monte Carlo likelihood technique. A comparison of the different models' ex-ante forecast performances indicates that the random-walk model in the Kalman filter approach is the preferred model to describe and forecast the time-varying behavior of stocks' betas on the Tehran Stock Exchange (TSE).

Keywords: time-varying beta risk; Kalman filter; bivariate t-GARCH; stochastic volatility; efficient Monte Carlo likelihood.<br />

JEL classification: C10; C58; G10; G12; G17.<br />

1 Introduction<br />

Beta is a parameter widely used by financial economists and practitioners to measure an asset's or portfolio's risk. It is a measure of systematic risk, the non-diversifiable portion of the variability in returns, in the Capital Asset Pricing Model (CAPM) originally developed by Sharpe (1964) and Lintner (1965). The main assumption made when calculating betas in this context is time invariance, i.e. the constancy of the CAPM betas. In this approach, betas are estimated via ordinary least squares (OLS). However, a considerable literature has focused on the dependency of the systematic risk of an asset on macroeconomic factors and has ultimately rejected the assumption of beta stability, e.g., Bos and Newbold (1984), Collins et al. (1987), Brooks, Faff and Lee (1992), Pope and Warrington (1996), Choudhry (2005), Choudhry et al. (2010).

Due to the excess kurtosis and leptokurtic nature of financial time series and the weak performance of time-invariant betas, several studies in recent decades have been devoted to modeling the time-varying behavior of systematic risk, some of which are mentioned above.

In this paper, different techniques are employed to analyze the time-varying behavior of betas for 30 diverse stocks grouped into 10 sectors 1 . The first technique used to estimate time-varying betas is based on the multivariate generalized autoregressive conditional heteroskedasticity (M-GARCH) model proposed by Bollerslev (1990). A GARCH(1,1) model is used to estimate conditional variances, from which the series of conditional time-varying betas are generated. This model has been applied in different studies to estimate time-varying betas, e.g., Giannopoulos (1995), Brooks et al. (1998), Faff et al. (2000), Li (2003), Choudhry and Wu (2007), Mergner and Bulla (2008), Choudhry et al. (2010).

The second technique employed to model the time-varying systematic risk of the selected sectors is based on another type of volatility-based approach, known as Stochastic Volatility (SV) models. Unlike GARCH models, which define the time-varying variance as a deterministic function of past squared innovations and lagged conditional variances, SV models treat the variance as an unobserved component that follows some stochastic process. SV models are reviewed in, e.g., Harvey et al. (1994), Danielsson (1994), Ghysels et al. (1996), Shephard (1996), Jacquier et al. (1999), Broto and Ruiz (2004), Asai et al. (2006).

1 Some studies indicate the usefulness of investigating betas at the sector rather than the stock level, e.g., Yao and Gao (2004).<br />



The last method utilized in this paper to model time-varying betas is based on state-space approaches. One of the well-known techniques used in this approach is the Kalman filter (KF). The main difference between this technique and the SV model lies in the estimation process. With the Kalman filter, the time-varying betas can be estimated directly, while in the SV model this must be done indirectly, using the estimated conditional variance series. This paper focuses on two forms of the transition equation in the KF approach, known as the Random Walk (RW) and Mean-Reverting (MR) models, which have been applied in several studies, e.g., Wells (1994), Groenewold and Fraser (1999), Mergner and Bulla (2008), He and Kryzanowski (2008).

The remainder of this paper is organized as follows. The specification of the competing techniques is discussed in section 2. The data and descriptive analysis are discussed in section 3, and the empirical results are presented in section 4. The conclusions and main implications of the results are presented in section 5.

2 Methodology<br />

2.1 The unconditional beta in the CAPM<br />

Following Sharpe (1964) and Lintner (1965), an asset's unconditional beta in the CAPM can be estimated via OLS:

R_jt - r_ft = α_j + β_j (R_mt - r_ft) + ε_jt,  j = 1, 2, …, N;  t = 1, 2, …, T, (1)

where R_jt is the expected rate of return on sector j, r_ft is the risk-free rate, and R_mt is the expected rate of return on the market portfolio. The beta is calculated as β_j = Cov(r_jt, r_mt) / Var(r_mt), where r_jt and r_mt, in order, represent the excess returns on sector j and on the market portfolio.
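As a concrete illustration, the OLS estimate of equation (1) on excess-return series can be computed as follows. This is a minimal sketch; the function and variable names are ours, not the paper's.

```python
import numpy as np

def capm_beta(r_j: np.ndarray, r_m: np.ndarray) -> tuple[float, float]:
    """OLS estimate of alpha_j and the unconditional beta_j from
    excess returns r_j (sector) and r_m (market), as in equation (1)."""
    X = np.column_stack([np.ones_like(r_m), r_m])  # intercept + market
    alpha, beta = np.linalg.lstsq(X, r_j, rcond=None)[0]
    return alpha, beta
```

The time-varying approaches in the rest of section 2 relax the assumption that this single beta is constant over the whole sample.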

2.2 Estimation of Conditional Betas by GARCH Model<br />

The traditional CAPM assumes that return series are independently and identically distributed (IID), while in practice there are signs of autocorrelation and volatility clusters that violate this assumption.

One of the main characteristics of financial time series is a phenomenon commonly referred to as volatility clustering, and because of this feature the volatility process is time-varying. GARCH models have been proposed to model and forecast the time-varying structure of volatilities, and the GARCH-based approach to modeling time-varying betas has been utilized in various studies, e.g., McClain et al. (1996), Lie et al. (2000), Faff et al. (2000), Brooks et al. (2002), Li (2003), Marti (2006), Choudhry and Wu (2007), Mergner and Bulla (2008), Choudhry et al. (2010). Furthermore, the multivariate GARCH (M-GARCH) model, first proposed by Bollerslev (1990), plays a key role in the modeling and estimation of betas.

According to Bauwens et al. (2006), we can distinguish three approaches to constructing M-GARCH models: (1) direct generalizations of the univariate GARCH model of Bollerslev (1986), which include the VEC, BEKK and factor models; (2) linear combinations of univariate GARCH models, which include the orthogonal and latent factor models; (3) non-linear combinations of univariate GARCH models, a category that includes the Constant Conditional Correlation (CCC) and Dynamic Conditional Correlation (DCC) models along with the General Dynamic Covariance (GDC) and Copula-GARCH models.

In this paper a bivariate version of the multivariate GARCH (M-GARCH) model is applied to calculate time-varying betas. A system of two conditional mean equations is considered as follows:

r_t = μ + ε_t, (2)


where r_t = (r_jt, r_mt)' is a vector containing the excess return of sector j and the excess return of the market portfolio, μ is a vector of constants, and ε_t denotes a vector of innovations conditioned on the complete information set Ω_t-1. Next, we consider a general bivariate GARCH model, shown in the equation below:

ε_t = H_t^(1/2) z_t, (4)

where E[ε_t | Ω_t-1] = 0 and E[ε_t ε_t' | Ω_t-1] = H_t. (5)

As mentioned earlier in section 2.2, there are different methods to specify the conditional variance matrix H_t, which we categorized into three groups. In this paper, we employ the Constant Conditional Correlation (CCC) model proposed by Bollerslev (1990). Using this method, we can reduce the complexity of

computation of general multivariate GARCH (1,1) models. The CCC model is defined as

H_t = D_t R D_t,  with  D_t = diag(h_jj,t^(1/2), h_mm,t^(1/2)), (6)-(7)

where R is a symmetric positive definite correlation matrix with unit diagonal elements. In the bivariate case, the number of parameters is seven. The calculations are performed with Ox 6 of Doornik (2009) and

the G@RCH 6 package of Laurent (2009). Because the sectors' excess returns exhibit leptokurtosis and skewness, as revealed in Table 1 and also noted by Sandmann and Koopman (1998) and Mergner and Bulla (2008), we have chosen a t-GARCH model, where 't' refers to the Student's t-distribution assumed for the

innovations in equation (4). Under these conditions, the time-varying beta of the t-GARCH model can be calculated as

β_jt = h_jm,t / h_mm,t, (8)

where h_jm,t is the conditional covariance between sector j and the market portfolio and h_mm,t is the conditional variance of the market portfolio.
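Under the CCC assumption the conditional covariance factors as h_jm,t = ρ·sqrt(h_jj,t·h_mm,t), so the beta in equation (8) reduces to ρ·sqrt(h_jj,t / h_mm,t). A rough sketch of this computation follows; it uses fixed, illustrative GARCH(1,1) parameters rather than the maximum-likelihood estimates the paper obtains with G@RCH, and all names are ours.

```python
import numpy as np

def garch_filter(r, omega, alpha, beta):
    """Conditional variance series h_t from a GARCH(1,1) recursion
    with given (here illustrative, not estimated) parameters."""
    h = np.empty(len(r))
    h[0] = r.var()  # initialize at the sample variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

def ccc_beta(r_j, r_m, params_j, params_m):
    """Time-varying beta under the CCC model: with constant
    correlation rho, beta_jt = h_jm,t / h_mm,t
                             = rho * sqrt(h_jj,t / h_mm,t)."""
    h_j = garch_filter(r_j, *params_j)
    h_m = garch_filter(r_m, *params_m)
    rho = np.corrcoef(r_j, r_m)[0, 1]  # constant conditional correlation
    return rho * np.sqrt(h_j / h_m)
```

The beta series then varies over time only through the ratio of the two conditional variances, which is the defining simplification of the CCC model.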

2.3 Estimation of Conditional Betas by Stochastic Volatility (SV) Model<br />

An alternative way of modeling conditional time-varying betas is offered by Stochastic Volatility (SV) models, introduced by Taylor (1982, 1986). This type of model has some features that distinguish it from GARCH-type models (Asai et al., 2006). Some of the major differences between the two approaches are:

i) In SV models the volatility process is random, whereas in GARCH-type models the conditional variance of returns is assumed to be a deterministic function of past returns (Asai et al., 2006).

ii) An unobserved shock is added to the return variance in SV models, which makes them more flexible than GARCH-type models (Kim et al., 1998).

iii) GARCH-type models are observation-driven while SV models are parameter-driven (Mergner, 2009).

Because of these properties, we use SV models for modeling time-varying betas.

We can represent a general SV model by a mean and a variance equation. The mean equation is given by

y_jt = σ_t ε_jt,  ε_jt ~ NID(0, 1),  t = 1, …, T, (9)

where y_jt is the mean-adjusted return on asset j. The variance equation is given by

σ_t² = σ*² exp(h_t), (10)

where σ* is a positive scaling factor. The stochastic process for h_t is modeled as an AR(1) process as follows:

h_t+1 = φ h_t + σ_η η_t,  η_t ~ NID(0, 1). (11)

The persistence parameter φ is restricted to be positive and less than one to ensure stationarity.

Because the conditional covariance in SV models is latent and must be integrated out of the joint likelihood function of excess returns, the parameters in this type of model cannot be estimated by a direct standard maximum likelihood technique. Therefore, numerous techniques have been developed to overcome this barrier. They range from GMM, proposed by Taylor (1986), and quasi-maximum likelihood (QML), introduced by Harvey et al. (1994), to more efficient methods such as Markov chain Monte Carlo (MCMC), proposed by Jacquier et al. (1994), Monte Carlo likelihood (MCL), proposed by Danielsson (1994), and efficient MCL, developed by Sandmann and Koopman (1998).

In this paper we use the efficient MCL method to estimate the SV model. The calculations for<br />

estimating the time-varying betas were performed with Ox 6 of Doornik (2009) and the SsfPack package of Koopman et<br />

al. (1999). Equation (12) below represents the formula for estimating a sector's beta:<br />

β_jt = ρ_jm (σ_jt / σ_mt), (12)<br />

where ρ_jm is the correlation coefficient between the excess returns of sector j and the market portfolio, and σ_jt and σ_mt denote their conditional volatilities.<br />
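With constant volatilities, the decomposition β = ρ · σ_j / σ_m reduces to the familiar covariance-over-variance beta; the SV approach simply replaces σ_j and σ_m with conditional, time-varying values. A quick sketch of the static case on simulated data (the series and the true beta of 1.2 are illustrative assumptions):

```python
import numpy as np

# Static beta from the correlation/volatility decomposition,
# beta_j = rho_jm * sigma_j / sigma_m, checked against cov/var.
rng = np.random.default_rng(1)
r_m = rng.normal(0.0, 1.0, 500)                # market excess returns
r_j = 1.2 * r_m + rng.normal(0.0, 0.5, 500)    # sector excess returns

rho = np.corrcoef(r_j, r_m)[0, 1]
beta = rho * r_j.std() / r_m.std()             # rho * sigma_j / sigma_m
beta_cov = np.cov(r_j, r_m)[0, 1] / np.var(r_m, ddof=1)
# beta and beta_cov agree: rho * sigma_j / sigma_m == cov / var
```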

2.4 Estimation of Conditional Betas by Kalman Filter Method<br />

The last method we employ in this paper to estimate time-varying betas is a state-space approach<br />

known as the Kalman Filter (KF). This method has been applied in many studies to estimate time-varying betas, e.g.,<br />

Bos and Newbold (1984), Faff et al. (2000), Yao and Gao (2004), Mergner and Bulla (2008), and Adrian and Franzoni<br />

(2009).<br />

Using the Kalman Filter has some advantages over the models discussed in the previous sections<br />

(GARCH-type and SV models). The first advantage is that the state-space formulation allows us to estimate the betas<br />

directly (Mergner, 2009). Second, the Kalman Filter converges quickly, and the underlying model has no effect on<br />

the convergence process (Yao and Gao, 2004).<br />

The Kalman Filter estimates the conditional beta using the following structure:<br />

r_jt = β_jt r_mt + ε_jt,  ε_jt ~ N(0, σ_ε²). (13)<br />

Equation (13) represents the observation or measurement equation of the state-space model. The other equation, which<br />

describes the dynamic process of the time-varying betas, is called the transition equation and is shown in equation<br />

(14) as follows:<br />

β_{j,t+1} = T_t β_jt + η_jt,  η_jt ~ N(0, σ_η²). (14)<br />


The transition or state equation is flexible and can take the form of a random walk, random coefficients, or a<br />

random-mean model. In this study, two forms of the transition equation have been used to estimate the time-varying betas,<br />

the Random Walk (RW) and Mean-Reverting (MR) models, which are specified as follows:<br />

Random Walk: β_{j,t+1} = β_jt + η_jt (15)<br />

Mean-Reverting: β_{j,t+1} = β̄_j + φ_j (β_jt - β̄_j) + η_jt (16)<br />

The Kalman Filter models were computed with Ox 6 of Doornik (2009) and the SsfPack package of Koopman<br />

et al. (1999).<br />
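The filter itself reduces to a few scalar recursions in the random walk case. The sketch below is a minimal illustration of the measurement/transition structure, not the paper's actual implementation: the noise variances q and r_var, the initial state, and the toy data are all assumptions (in practice these hyperparameters are estimated by maximum likelihood):

```python
import numpy as np

# Minimal scalar Kalman filter for a random-walk beta:
#   observation: r_j,t = beta_t * r_m,t + eps_t
#   state:       beta_t = beta_{t-1} + eta_t
def kalman_rw_beta(r_j, r_m, q=1e-4, r_var=0.25, beta0=1.0, p0=1.0):
    betas = np.empty(len(r_j))
    beta, p = beta0, p0
    for t in range(len(r_j)):
        p = p + q                                # predict: state variance grows
        x = r_m[t]                               # time-varying regressor
        s = x * p * x + r_var                    # innovation variance
        k = p * x / s                            # Kalman gain
        beta = beta + k * (r_j[t] - x * beta)    # update with innovation
        p = (1.0 - k * x) * p                    # updated state variance
        betas[t] = beta
    return betas

# Toy data with a slowly drifting true beta
rng = np.random.default_rng(2)
T = 2000
r_m = rng.normal(0.0, 1.0, T)
true_beta = 1.0 + 0.3 * np.sin(np.linspace(0.0, 3.0, T))
r_j = true_beta * r_m + rng.normal(0.0, 0.5, T)
est = kalman_rw_beta(r_j, r_m)
```

After a short burn-in period, the filtered betas track the slow drift in the true coefficient, which is the directness advantage noted above.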

3 Data and Descriptive Analysis<br />

3.1 Data series of Excess Returns<br />

The data used in this paper are time series of daily excess returns 2 on 30 stocks grouped into 10<br />

diverse sectors. The sample period runs from July 2003 to December 2009. All information and data on the<br />

indices and sectors were gathered from the Tehran Stock Exchange (TSE), and the main index of the TSE serves as a<br />

proxy for the market portfolio. The actual excess returns of the Main Index and two sectors (Pharmaceutical and<br />

Machinery & Equipments) are shown in Figure 1 below.<br />

2 - The excess returns between periods t-1 and t for index j are calculated as r_jt = R_jt - rf_t, where R_jt is the return on index j and rf_t is the<br />

risk-free rate of return, based on the rate of a 1-year deposit account converted to daily returns.<br />
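The footnote's construction can be sketched in a few lines. The prices, the 15% annual deposit rate, and the 250-trading-day year below are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# Daily excess returns: daily log return minus a daily risk-free rate
# derived from an annual deposit rate (all inputs illustrative).
prices = np.array([100.0, 101.0, 100.5, 102.0])
returns = 100.0 * np.diff(np.log(prices))            # daily log returns, in %

annual_deposit_rate = 0.15                           # assumed 1-year deposit rate
rf_daily = 100.0 * ((1.0 + annual_deposit_rate) ** (1.0 / 250.0) - 1.0)

excess = returns - rf_daily                          # excess returns, in %
```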



3.2 Descriptive Analysis<br />

Figure 1: Daily excess returns (in %) of Main Index and 2 sectors.<br />

The descriptive statistics for the data are presented in Table 1. The highest daily mean excess<br />

return belongs to Basic Metals (0.13%) and the lowest to the Investment sector. It is also evident that<br />

the daily excess return series show a high level of kurtosis (they are leptokurtic). The Jarque-Bera statistic,<br />

reported in the last column, strongly supports rejection of the normality hypothesis for all return series at the 1%<br />

significance level.<br />
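The Jarque-Bera statistic combines skewness S and excess kurtosis K as JB = n/6 · (S² + K²/4). Plugging in the Main Index row of Table 1 (n = 1255, S = 0.04, K = 11.61) reproduces the reported value of about 7055 up to rounding, which suggests the kurtosis column reports excess kurtosis; a quick check:

```python
# Jarque-Bera from summary statistics: JB = n/6 * (S^2 + K^2 / 4),
# where K is excess kurtosis.
def jarque_bera(n, skew, excess_kurt):
    return n / 6.0 * (skew ** 2 + excess_kurt ** 2 / 4.0)

jb_main = jarque_bera(1255, 0.04, 11.61)   # close to the 7055 in Table 1
```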

Sector N Mean Std. Dev. Skewness Kurtosis Jarque-Bera<br />

Main Index 1255 -0.0100 0.5999 0.04 11.61 7055<br />

Basic Metals 1255 0.1332 1.2372 6.38 87.81 4.1176e+005<br />

Pharmaceutical 1255 0.0236 0.5364 3.41 38.72 80853<br />

Food & Beverage 1255 0.0181 0.8633 -0.05 46.75 1.1429e+005<br />

Automobiles and Parts 1255 0.0258 0.9113 1.90 41.89 92513<br />

Chemical 1255 0.0714 1.0840 2.61 58.23 1.7875e+005<br />

Investment 1255 0.0073 0.8175 -0.12 17.76 16498<br />

Banks & Credit 1255 0.1305 1.8981 3.26 88.14 4.0852e+005<br />

Homebuilding 1255 0.0875 0.9769 0.40 10.44 5739<br />

Machinery & Equipments 1255 0.0267 1.0147 3.31 48.05 1.2304e+005<br />

Petroleum Products 1255 0.0306 0.8935 4.42 49.31 1.3124e+005<br />

Table 1: Descriptive statistics of daily excess returns<br />

4 Empirical Results<br />

4.1 Comparison of Alternative Methods’ Results on Beta Estimation<br />

In this section, we compare the results of four different methods of beta estimation: the time-invariant method<br />

(OLS) and three techniques that allow for time variation in betas. The latter three, described earlier in<br />

Section 2, comprise two volatility-based models (t-GARCH(1,1) and SV) and a state-space model.<br />

The state-space model is based on the Kalman Filter (KF) approach, investigated in the two forms of the random<br />

walk and the mean-reverting specification.<br />

The results for the unconditional and conditional betas are presented in Table 2. The conditional betas are<br />

reported as means, because their values vary over time.<br />

sector * * * *<br />

Automobiles and Parts 1.125 1.174 1.146 1.241 1.191<br />

Banks & Credit 1.085 1.052 1.077 1.068 1.034<br />

Basic Metals 0.945 0.876 0.931 0.958 0.915<br />

Chemical 1.011 1.027 1.029 1.022 0.986<br />

Food & Beverage 0.865 0.907 0.891 0.886 0.876<br />

Homebuilding 1.135 1.118 1.107 1.113 1.124<br />

Investment 1.354 1.312 1.308 1.313 1.293<br />



Machinery & Equipments 1.252 1.237 1.206 1.231 1.189<br />

Petroleum Products 0.914 0.927 0.911 0.883 0.954<br />

Pharmaceutical 1.184 1.166 1.147 1.139 1.152<br />

* The conditional beta series are reported as means.<br />

Table 2: The unconditional and conditional beta series for the different sectors<br />

4.2 Measurement of Forecasting Accuracy<br />

In an attempt to assess the relative forecast accuracy of the selected models, and to determine which of the<br />

approaches provides the best estimates of the time-varying betas, we employ two well-known criteria to evaluate<br />

the in-sample forecast performance of the alternative methods: the mean absolute error (MAE) and the mean<br />

squared error (MSE):<br />

MAE_j = (1/T) Σ_t |r_jt - r̂_jt| (17)<br />

MSE_j = (1/T) Σ_t (r_jt - r̂_jt)² (18)<br />

In these equations, r̂_jt denotes the series of forecast excess returns for sector j. The results are<br />

presented in Table 3 and Table 4. On the basis of both criteria, calculating beta with the Kalman<br />

filter under the random walk specification yields the best results: the MAE of the KF approach<br />

with the random walk model is the lowest among the competing techniques. Furthermore, as Table 3 shows, the<br />

random walk model is superior to the other techniques in all sectors except Automobiles and Parts and Pharmaceutical,<br />

where it ranks second by a small margin.<br />

sector<br />

Automobiles and Parts 0.0146 0.0156 0.0152 0.0141 0.0155<br />

Banks & Credit 0.0110 0.0097 0.0086 0.0091 0.0093<br />

Basic Metals 0.0124 0.0117 0.0110 0.0112 0.0113<br />

Chemical 0.0122 0.0119 0.0114 0.0116 0.0120<br />

Food & Beverage 0.0102 0.0095 0.0092 0.0092 0.0093<br />

Homebuilding 0.0119 0.0119 0.0115 0.0117 0.0118<br />

Investment 0.0111 0.0098 0.0078 0.0084 0.0086<br />

Machinery & Equipments 0.0118 0.0098 0.00865 0.0095 0.0097<br />

Petroleum Products 0.0134 0.0129 0.01018 0.0123 0.0128<br />

Pharmaceutical 0.0117 0.0114 0.0113 0.0112 0.0117<br />

Average MAE 0.0120 0.0114 0.0105 0.0108 0.0112<br />

Table 3: In-sample mean absolute error (MAE)<br />
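The two in-sample criteria are straightforward to compute for a single sector; the toy actual and fitted series below are illustrative:

```python
import numpy as np

# MAE and MSE between actual and forecast excess returns for one sector.
def mae(actual, forecast):
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))))

def mse(actual, forecast):
    return float(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2))

r = np.array([0.5, -0.2, 0.1, 0.3])       # actual excess returns
r_hat = np.array([0.4, -0.1, 0.2, 0.1])   # fitted excess returns
# mae(r, r_hat) -> 0.125
# mse(r, r_hat) -> 0.0175
```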

In addition to the MAE criterion, forecasting performance is assessed with a second benchmark (MSE); the results<br />

are reported in Table 4. Consistent with the previous findings on the superiority of the Kalman Filter<br />

technique under the random walk model, this method has the lowest mean squared error among the<br />

rival techniques, and it shows the best performance in all sectors except Homebuilding and Machinery &<br />

Equipments.<br />

sector<br />

Automobiles and Parts 0.2845 0.3112 0.2145 0.2731 0.2914<br />

Banks & Credit 0.1863 0.1645 0.1518 0.1712 0.1685<br />

Basic Metals 0.3144 0.2953 0.2641 0.2761 0.2822<br />

Chemical 0.2664 0.2418 0.2218 0.2453 0.2719<br />

Food & Beverage 0.3651 0.3462 0.3387 0.3425 0.3712<br />

Homebuilding 0.1628 0.1586 0.1541 0.1517 0.1612<br />

Investment 0.4132 0.3825 0.2894 0.3466 0.3368<br />



Machinery & Equipments 0.1918 0.2317 0.1894 0.1854 0.2019<br />

Petroleum Products 0.3154 0.2985 0.2616 0.2714 0.2916<br />

Pharmaceutical 0.2371 0.2281 0.2085 0.2114 0.2561<br />

Average MSE 0.2737 0.26584 0.22939 0.24747 0.26328<br />

Table 4: In-sample mean squared error (MSE)<br />

5 Conclusions<br />

Considerable empirical evidence suggests that unconditional estimates of systematic risk are not stable over time. Our<br />

results confirm these findings and show overwhelmingly that betas are unstable and systematically time-varying.<br />

In this paper, we considered three alternative techniques for modeling this time variation in conditional betas: (i) a<br />

bivariate t-GARCH(1,1) model, (ii) two Kalman Filter based approaches, the random walk (RW) and the mean-<br />

reverting (MR) specification, and (iii) a bivariate stochastic volatility (SV) technique.<br />

In our analysis, excess returns were forecast in-sample using the conditional beta series<br />

generated by each of these techniques. The comparison of the in-sample forecast accuracy of each conditional beta<br />

technique indicates that time-varying sector betas are best described by the Kalman Filter with a random walk<br />

process. This result supports the finding of Mergner and Bulla (2008) that the Kalman Filter technique is superior to<br />

alternative approaches. The results presented in this study motivate further research in this field, such as<br />

using exogenous factors to explain the time-varying behavior of systematic risk, applying behavioral finance<br />

approaches, and building new methods that address the shortcomings of existing techniques.<br />

6 References<br />

Adrian, T., and F. Franzoni. 2009. Learning about beta: Time-varying factor loadings, expected returns, and the<br />

conditional CAPM. Journal of Empirical Finance, 16(4): 537-556<br />

Asai, M., M. McAleer, and J. Yu. 2006. Multivariate stochastic volatility: A review. Econometric Reviews 25, no. 2-3:<br />

145-175.<br />

Bauwens, L., S. Laurent, and J. Rombouts. 2006. Multivariate GARCH models: A survey. Journal of Applied<br />

Econometrics 21: 79-109.<br />

Bollerslev, T. 1986. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31: 307–27.<br />

———. 1987. A conditionally heteroskedastic time series model for speculative prices and rates of return. Review of<br />

Economics and Statistics 69: 542–7.<br />

———. 1990. Modeling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH<br />

model. Review of Economics and Statistics 72, no. 3: 498–505.<br />

Bollerslev, T., R.F. Engle, and J.M. Wooldridge. 1988. A capital asset pricing model with time-varying covariances.<br />

Journal of Political Economy 96, no. 1: 116–31.<br />

Bos, T. and P. Newbold. 1984. An empirical investigation of the possibility of stochastic systematic risk in the<br />

market model. Journal of Business 57, no. 1: 35–41.<br />

Brooks, R.D., R.W. Faff, and J. Lee. 1992. The form of time variation of systematic risk: Some Australian evidence.<br />

Applied Financial Economics 2: 191-198.<br />

Brooks, R.D., R.W. Faff, and M.D. McKenzie. 1998. Time-varying beta risk of Australian industry portfolios:<br />

comparison of modeling techniques. Australian Journal of Management 23, no. 1: 1–22.<br />

Choudhry, T. 2005. Time-Varying Beta and the Asian Financial Crisis: Investigating the Malaysian and Taiwanese<br />

Firms. Pacific-Basin Finance Journal, 13: 93-118.<br />

Choudhry, T., and H. Wu. 2007. Time-Varying Beta and Forecasting UK Company Stock Returns: GARCH Models<br />

vs. Kalman Filter Method, University of Southampton Press.<br />

Choudhry, T., Lu, L. and K. Peng. 2010. Time-Varying Beta and the Asian Financial Crisis: Evidence from the<br />

Asian Industrial Sectors. Japan and the World Economy, 22: 228-234.<br />



Collins, D.W., J. Ledolter, and J. Rayburn. 1987. Some further evidence on the stochastic properties of systematic<br />

risk. Journal of Business 60, no. 3: 425–48.<br />

Danielsson, J. 1994. Stochastic volatility in asset prices, estimation with simulated maximum likelihood. Journal of<br />

Econometrics 64: 375–400.<br />

Doornik, J.A. (2009). An Object-Oriented Matrix Language Ox 6, London: Timberlake Consultants Press.<br />

Durbin, J., and S.J. Koopman. 2001. Time series analysis by state space methods. Oxford Statistical Science Series.<br />

Oxford: Oxford University Press.<br />

Engle, R.F. 1982. Autoregressive conditional heteroskedasticity with estimates of the variance of UK inflation.<br />

Econometrica 50: 987–1008.<br />

Engle, R.F., and K.F. Kroner. 1995. Multivariate simultaneous generalized ARCH. Economic Theory 11: 122–50.<br />

Faff, R.W., D. Hillier, and J. Hillier. 2000. Time varying beta risk: An analysis of alternative modeling techniques.<br />

Journal of Business Finance and Accounting 27, no. 5: 523–54.<br />

Ghysels, E., A.C. Harvey, and E. Renault. 1996. Stochastic volatility. Vol. 14 of Handbook of Statistics, ed. G.S.<br />

Maddala and C.R. Rao, 128–98. Amsterdam: North Holland.<br />

Giannopoulos, K. 1995. Estimating the time varying components of international stock markets’ risk. European<br />

Journal of Finance 1: 129–64.<br />

Groenewold, N., and P. Fraser. 1999. Time-varying estimates of CAPM betas. Mathematics and Computers in<br />

Simulations 48: 531–9.<br />

Harvey, A.C., E. Ruiz, and N. Shephard. 1994. Multivariate stochastic variance models. Review of Economic Studies<br />

61: 247–64.<br />

He, Z., and L. Kryzanowski. 2008. Dynamic betas for Canadian sector portfolios. International Review of Financial<br />

Analysis 17: 1110–1122.<br />

Jacquier, E., N.G. Polson, and P.E. Rossi. 1994. Bayesian analysis of stochastic volatility models. Journal of<br />

Business and Economic Statistics 12: 371–89.<br />

Kim, S., N. Shephard, and S. Chib (1998). Stochastic volatility: Likelihood inference and comparison with ARCH<br />

models. Review of Economic Studies 65 (3), 361-393.<br />

Koopman, S.J., N. Shephard, and J.A. Doornik. 1999. Statistical algorithms for models in state space using SsfPack<br />

2.2. Econometrics Journal 2: 113–66. http://www.ssfpack.com.<br />

Laurent S. (2009). G@RCH 6, Estimating and Forecasting ARCH Models. London: Timberlake Consultants Press.<br />

Li, X. 2003. On unstable beta risk and its modeling techniques for New Zealand industry portfolios. Working Paper<br />

03.01, Massey University Commerce, Auckland, New Zealand.<br />

Lie, F., R. Brooks, and R. Faff. 2000. Modeling the equity beta risk of Australian financial sector companies.<br />

Australian Economic Papers 39: 301–11.<br />

Lintner, J. 1965. The valuation of risky assets and the selection of risky investments in stock portfolios and capital<br />

budgets. Review of Economics and Statistics 47: 13–37.<br />

Mergner, S., and J. Bulla. 2008. Time-varying beta risk of pan-European industry portfolios: A comparison of<br />

alternative modeling techniques. European Journal of Finance 14: 771-802.<br />

Mergner, S. 2009. Applications of State Space Models in Finance. Göttingen University Press.<br />

Pope, P. and M. Warrington. 1996. Time-varying properties of the market model coefficients. Accounting Research<br />

Journal 9: 5-20<br />

Sandmann, G., and S.J. Koopman. 1998. Estimation of stochastic volatility models through Monte Carlo maximum<br />

Likelihood. Journal of Econometrics 87: 271–301.<br />

Sharpe, W.F. 1964. Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance<br />

19:425–42.<br />

Wells, C. 1994. Variable betas on the Stockholm exchange 1971–1989. Applied Economics 4: 75–92.<br />

Yao, J. and J. Gao. 2004. Computer-intensive time-varying model approach to the systematic risk of Australian<br />

industrial stock returns. Australian Journal of Management 29, no. 1: 121–46.<br />
