
Wednesday, August 31, 2016

Sequencing in the construction of State capacity: Walk before you can run

by Ajay Shah.

In thinking about the State, there are two useful principles:

  1. We should embark on things that we can do (i.e. don't take on things that we don't have the ability to do); and
  2. We should walk before we run (i.e. do simple things, achieve victory, then move on to a more complex problem).

These simple and obvious ideas have interesting implications for how we think about public policy and public administration in a place like India, where there is a crisis of State capacity and numerous policy initiatives break down in implementation.

Premature load bearing


Pritchett, Woolcock and Andrews, 2010 talk about `premature load bearing' as a source of implementation shortfall. Their metaphor is a bridge that is built to a certain limited capability. If you run a truck over the bridge that is heavier than this limit, the bridge comes crashing down. In this example, the concepts of `load' and `load bearing capacity' are quite clear.

Example. We commission a government facility that registers land transactions. It is inadequately sized. The staff collapses under the crushing pressure of a large number of transactions. A black market develops where some people pay bribes to get ahead. The staff is partly super-busy fire-fighting, and has no time to think about fixing the broken system; it is also happy to receive a steady flow of bribes, and lacks the incentive to fix the system. The load was too high, the capacity was inadequate, and the system collapsed.

What is load in public administration?


With a bridge, the load is clear: the mass of the vehicles that run on the bridge. How do we think about load in the public policy context? What are easy problems vs. what are difficult problems? Pritchett and Woolcock, 2003 suggest two dimensions of load: high transactions and high discretion. The example above (land titling office that collapsed) is a simple example where the question was of transaction processing capability. That is an easy one to comprehend. But we should look beyond this simple engineering perspective of counting the number of transactions. A public administration problem is more difficult when front-line officials have more discretion.

There is a valuable third dimension to thinking about load, which is the stakes. What is at stake? What does a corrupt official stand to gain? It is easier to run a system where the stakes are low. As an example, the personal gains to a school teacher from being absent half the time are relatively small. Making school teachers show up to work and teach well is a transaction-intensive, discretion-intensive service, but it is a relatively tractable problem, as the stakes are low. The gains to a bad tax official, in contrast, can be 1000 times larger than the gains to a bad school teacher.

Thinking about the incentives of the civil servant takes us to thinking about load that comes from the magnitude of the principal-agent problem between the objectives of the organisation and the objectives of the individuals that man it. The load that is placed upon a system is the extent to which the objectives of the individuals diverge from the objectives of the organisation.

Example: Parking enforcement


Every economics student is exposed to the problem of enforcing self-service parking meters. Cars are supposed to pay Rs.10 for parking. Suppose we do not police compliance pervasively: there is only a 1% probability of getting caught. Let's set the fine at Rs.1000, so that the expected fine equals the Rs.10 fee. Risk averse persons will then prefer to pay the fee for sure (i.e. pay Rs.10) instead of taking the risk of losing Rs.1000.

The attractive thing about this fable is that it shows us the path to a small traffic police force. Instead of having a large number of policemen watching all cars, we can get by with limited enforcement. We can have a small government and yet get the job done. At first blush, you would think a small police force always makes public administration easier, as it involves a smaller number of transactions.

The story changes significantly when we worry about the principal-agent problem between the police department and the front-line enforcer of the fine. Do we have the ability to create checks and balances where the police will actually collect a fine of Rs.1000? Or will this collapse in pervasive corruption?

The magnitude of the fine is the load that the system is placed under. When the fine is small, e.g. Rs.100, the policeman has the choice of taking a bribe of Rs.50 and catering to his personal interest, or insisting on the fine of Rs.100. This is a small divergence between self-interest and the objectives of the organisation. But when the fine is Rs.1000 or Rs.10,000, the gap between the two enlarges. The system is placed under greater load.

Suppose the user charge is $u$, the probability of getting caught is $p_1$ and the fine is $F$. In the textbook, we try to make $p_1$ small, so as to have a small police force, and compensate by ratcheting up $F = u/p_1$. However, large values of $F$ are a large load upon the system. We have to ask ourselves whether we have designed a public administration mechanism that is able to deal with a large $F$.
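As a worked illustration using the numbers from the example above: with $u = 10$ and $p_1 = 0.01$, deterrence requires $F \geq u/p_1 = 1000$, i.e. the expected fine $p_1 F$ must be at least as large as the user charge. If we shrink the police force further so that $p_1$ falls to $0.001$, the required fine rises to Rs.10,000, and with it the load placed on the system.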

What is load bearing capacity?


We are asked to build a bridge that will be strong enough to take 10 main battle tanks weighing 60 tonnes each. Now we must pull together an elaborate array of design features in the bridge, so that it is able to cope with this load.

In similar fashion, once we know about the number of transactions, the extent of discretion of front-line officials, and about the stakes, we have a characterisation of the load. What are the elements that shape load bearing capacity?

The simplest question is transaction processing capacity. If there are 100 land market transactions a day, we must build a facility that will have commensurate capacity. The second dimension is discretion: if systems can be designed which reduce discretion, this will increase the load-bearing capacity. In some situations, IT systems can remove discretion and thus remove one dimension of the load.

The most important element of load bearing capacity is to think about the stakes, i.e. the maximisation of the official. Public administration is about establishing processes so that the organisation achieves its goals even though individuals have divergent personal interests. The task of management is to reshape incentives so that the narrow self-interest of employees gets aligned with the objectives of the organisation. The quality of the processes, and the checks and balances that have been designed, determine the load-bearing capacity.

Let's go back to the parking fine problem. What shapes the thinking of the policeman? There is a probability $p_2$ that he gets caught if he asks for a bribe. From his point of view, if $p_2$ is high and $F$ is low, then it's safer to just enforce the fine. As the fine gets bigger, he is more tempted to ask for a bribe. Under good conditions of public administration, $p_2$ is high. If a cop in London asks for a bribe, there's a good chance that he gets caught. Our job in public administration is to build the checks and balances so that $p_2$ goes up. The load-bearing capacity of the system is reflected in $p_2$.
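To make the role of $p_2$ concrete, here is a minimal stylised sketch in Python. The payoff structure is illustrative and not from the article: we assume the bribe demanded is half the fine, and that a policeman caught taking a bribe bears a penalty of Rs.5,000 (lost job value, prosecution risk).

    # Stylised model of the front-line enforcer's choice; for illustration only.
    # Assumptions (not from the article): the bribe demanded is half the official
    # fine F, and a policeman caught taking a bribe (probability p2) bears a
    # penalty (lost job value, prosecution risk).

    def takes_bribe(F, p2, penalty=5000.0):
        """True if the expected payoff from demanding a bribe beats honest enforcement."""
        bribe = 0.5 * F                              # assumed size of the bribe
        return (1 - p2) * bribe - p2 * penalty > 0   # honest payoff normalised to zero

    for F in (100, 1000, 10000):
        for p2 in (0.05, 0.5):
            print(F, p2, takes_bribe(F, p2))

In this toy model, a Rs.100 fine never tempts the policeman, a Rs.1000 fine tempts him only when $p_2$ is low, and a Rs.10,000 fine requires a much higher $p_2$ to hold the line: the load-bearing capacity must rise with the load.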

There is a tension between making a problem easier by reducing the number of transactions and making a problem easier by reducing the stakes. E.g. when a parking fine goes from Rs.1000 to Rs.100, the probability of getting caught must rise tenfold to preserve deterrence, so the number of fines (i.e. the number of transactions and citizen interfaces) has to go up by 10 times.

Consequences of premature load bearing


When an organisation is asked to deal with load that goes beyond its load-bearing capacity, what results is a rout: `a collapse of organisational coherence and integrity'. While lip service to the goals of the organisation continues, the everyday reality on the ground is a large divergence between the behaviour of individuals in the organisation and the objectives of the organisation.

Once an organisation collapses in this fashion, it shifts into a low level equilibrium of pervasive rent collection. All that goes on is the abuse of the coercive power of the State in favour of laziness and corruption by the persons manning and wielding the instruments of power. These rents often get ingrained into a new political arrangement, creating political incentives for preserving the status quo. Pritchett et al. thus encourage us to watch out for premature load bearing, particularly because it can lead to sustained and persistent implementation shortfall, and create an incumbent set of players who are the biggest obstacles to fixing things.

We see this with entrenched bureaucracies in many areas in India. Premature load bearing led to an organisational rout, and in the wreckage we now have incumbents that man the State machinery who are zealously defending the corruption and laziness.

Walk before you can run


We should build the parking enforcement system through the following constructive strategy (a stylised sketch in code follows the list):

  1. First, we would set up a parallel and independent measurement system to obtain data on the extent to which cars are not paying the user charge, and on the perception among citizens that when offenders are caught, they pay a bribe instead of the fine.
  2. Next, we would build a large police force (i.e. high $p_1$) and a low $F$. We would put down all kinds of monitoring and checks and balances, in order to overcome the principal-agent problem. We would design accountability mechanisms to create pressure on the leadership and at every level of the police force, so as to get $p_2$ up.
  3. We would fight with implementation shortfall until the survey evidence shows us that the offenders are paying the fine and not the bribe.
  4. Only then would we announce: we know how to run this system for a certain $F$ and $p_1$.
  5. Only then would we take the next step, of reducing the police force by 25% and increasing the fine to $1.25 F$. This would be assisted by behavioural changes among the police and citizens, who would have become more habituated to good behaviour, so that misbehaviour is more likely to set off alarms.
  6. We would do this, one step at a time, pushing up the fine by 25% at each step, and continuously watching the survey evidence to look at the incidence of bribe-taking. At a step where the survey evidence shows that bribe-taking has gone up unacceptably, we would stop and undertake deeper reforms of the public administration mechanisms so as to push up $p_2$ until the extent of bribe-taking goes back to an acceptable level.
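A stylised sketch of this loop in Python; the survey function, the 10% tolerance and the step sizes here are illustrative placeholders, not parameters proposed in the article:

    # 'Walk before you run': raise the fine only as fast as the system can bear it.
    # survey_bribe_incidence() stands in for the independent measurement system of
    # step 1; strengthen_checks() stands in for deeper public administration reform
    # that pushes p2 up. The 25% step and the 10% tolerance are illustrative.

    def escalate_fine(F, target_F, survey_bribe_incidence, strengthen_checks,
                      tolerance=0.10, max_rounds=50):
        for _ in range(max_rounds):
            if F >= target_F:
                break
            F_next = 1.25 * F                        # raise the fine by 25% (step 5)
            if survey_bribe_incidence(F_next) > tolerance:
                strengthen_checks()                  # deepen reform before adding load (step 6)
            else:
                F = F_next                           # the system bore the new load; lock it in
        return F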

The four hardest problems in State building


The stakes are sky high in four areas:

  • The criminal justice system,
  • The judiciary,
  • Tax collection, and
  • Infrastructure + financial regulation.

In these four areas, the personal gains that staffers in government can get, in return for sacrificing the objectives of the organisation, are thousands of times larger than their wage income. They all involve a large number of transactions, and there is inescapable discretion in the hands of front-line officials. Here, creating the public administration machinery to make civil servants behave correctly is the hardest. These four areas are the most challenging problems in State building.

This has two implications:

  1. Particularly in these areas, we should learn to walk before we can run. At first, policy pathways should involve low load. In order to do this, we should push towards low transaction intensity, low discretion and low stakes.
  2. These four problems should take up the highest priority as the big hairy audacious goals of State building. The top management has to prioritise time and resources for these big four problems.

Example: Punishments in the criminal justice system


Every now and then, we have outrage in India about crime. After a great deal of hand wringing, the outcome too often is: Let's increase the punishment.

The criminal justice system (laws, police, prosecution, prisons, courts) is one of the hardest problems in public administration. The policeman and the public prosecutor are able to talk with the accused and threaten that if the law is enforced, a certain punishment will flow, and ask for a bribe in exchange for not enforcing the case. The bigger the punishment, the bigger the divergence between personal benefits and the objectives of the organisation.

Ratcheting up punishments in response to failures of enforcement is thus precisely wrong. The criminal justice system is already failing under a low load (e.g. 2 years imprisonment for rape). Increasing the punishment (e.g. to 4 years imprisonment for rape) places a higher load upon a broken system. This will result in an inferior criminal justice system.

First we have to make the criminal justice system work with low punishments. There is a lot to be done, in building the criminal justice system, with reforms of laws, lawyers, police, public prosecutors and prisons. We should keep punishments low, and make this work in terms of processes, delays, arbitrariness, etc.

Only after that can we consider the tradeoffs of higher punishments. This consideration of load bearing capacity trumps all others. You may have a strong moral belief that a certain crime deserves a certain punishment. You may be able to demonstrate that the minimum level of punishment required to achieve deterrence is quite high. You may see mainstream practices in developed countries and think they give a ballpark estimate for what a certain punishment should be. All these considerations are irrelevant. The maximal punishment that should be used is the one that we are able to pull off, in terms of the load bearing capacity of the criminal justice system. Only after we have established a high load bearing capacity can we bring the other considerations into play, and potentially ratchet up to larger punishments.

There are other good arguments in favour of low punishments. One is the Occam's razor of public policy: we should desire the lowest punishment which gets the job of deterrence done. James Scott has a meta-principle: prefer to do things that you can undo if you discover you were wrong. He has a footnote saying this is a good reason to not have a death penalty. A certain fraction of the people whom we convict are inevitably Type 1 errors (innocent but convicted); the harm that we impose on them is lower when punishments are lower.

Example: Design of the Goods and Services Tax in India


Satya Poddar and I have an article on `walk before you can run' applied to the design of the Indian GST.

Example: Armed forces


An article from October 2016.

Example: bankruptcy reform


An article from June 2017. A tangible argument.

Conclusion


Most management is about principal-agent problems. Most public administration is about the principal-agent problem between citizens and State employees. To be a public policy thinker, we have to ask three questions. What's the market failure? What's the minimal intervention that can address this market failure? Do we have the ability to build this intervention, in the light of public choice theory? The third test is a big barrier in India. Many things that sound reasonable, and are done by many countries, are not feasible in India. We in India have yet to learn how to establish accountability mechanisms through which we obtain high performance agencies. We are at the early stages of this journey.

As Kaushik Basu says, there is libertarianism of choice and there is libertarianism of necessity. All too often in India, the right answer is to do less in government, out of respect for limited State capacity. I think of `do less' as having two dimensions.

The first is to not do certain things at all e.g. it's impossible for India to build unemployment insurance. This will permit scarce resources (money, management capacity) to be focused on a few core issues. As an example, Jeff Hammer emphasises that in the field of health, we should use our limited State capacity to emphasise public health.

The second is to get started with low loads. In an environment of pervasive implementation shortfall, we should first score victory with policy pathways that place low load upon public agencies. We should ask for high performance organisations which are able to deliver results when given easy problems : low transaction intensity, low discretion and low stakes. Only after we know how to walk should we consider running.

There is very limited capacity in terms of the top management which will build and run these systems. We have to focus on the few highest priorities that are worth pursuing. The big four are: the criminal justice system, the judiciary, tax collection, and infrastructure + financial regulation. They should be the top priority in State building, and have the highest claim on scarce top management time and resources. In each of these four areas, our strategy should be:

  1. Build an overall strategy;
  2. Create independent measurement so as to track the performance of the system e.g. survey-based measurement of the incidence of corruption in tax administration; 
  3. Rescale the objectives of policy to minimise the load, i.e. favour pathways that involve a low number of transactions, low discretion for officials and low stakes;
  4. Design for load-bearing capacity. This is about adequate sizing for the required number of transactions, setting up processes which reduce discretion, and creating checks and balances to get accountability. This involves many design elements, as has been done for macro/finance by FSLRC: clarity of objective, minimising conflicts of interest, limited powers and discretion, precision of rules, procedural and transactional transparency, accountability mechanisms including the rule of law.
  5. Achieve demonstrable success on low load problems;
  6. Consider increased load only after success has been achieved at low load.

How does this link up to the debate about small steps versus grand schemes, i.e. the tension between incrementalism and transformative change? We should do transformative change like the GST, but we should start with a low load GST -- low rate, flat rate, comprehensive base. We should first learn how to build the load-bearing capacity for this low-load GST, before contemplating a higher rate, multiple rates or a Balkanised base.

Acknowledgements


I am grateful to John Echeverri-Gent, Lant Pritchett and Vijay Kelkar for stimulating conversations on these issues.

References


Capability traps? The mechanism of persistent implementation failure. Lant Pritchett, Michael Woolcock, Matt Andrews. Working Paper, 2010.
Has a section on premature load bearing.

Solutions when the solution is the problem: Arraying the disarray in development. Lant Pritchett, Michael Woolcock. World Development, 2003.
Introduces the $2 \times 2$ scheme for classifying problems as hard when there is discretion and transaction intensity.

Improving governance using large IT systems. Ajay Shah. In S. Narayan, editor, Documenting reforms: Case studies from India, Macmillan India, 2006.
IT systems can be used to remove discretion and thus make some problems easier.

Tuesday, August 30, 2016

The measurement of Indian manufacturing GDP: problems and some solutions

by Amey Sapre and Pramod Sinha.

Since the release of the 2011-12 series, the reliability of Indian GDP data has been the subject of intense debate. In the case of manufacturing GDP, there were large upward revisions in growth rates, from 1.1% to 6.2% in 2012-13 and from -0.7% to 5.29% in 2013-14, which were inconsistent with trusted private databases on manufacturing growth. The introduction of the new MCA-21 dataset has also raised questions, as the dataset has not been released, making it impossible for independent researchers to cross-check the estimates.

GDP estimation is a remarkably complex process. It is built on several sub-processes, datasets, and methodologies at the sub-sector level. At every base year revision, we see changes in sources and methods of computation that aim to yield improved measurement of macro aggregates. Valuable insights can be derived, through the study of measurement issues, for our interpretation of the resulting data and thus our reading of macroeconomic conditions.

In a recent paper, we address three questions about Indian manufacturing GDP estimation:

  1. Are we correctly measuring output and intermediate consumption in the formula for Gross Value Added (GVA)?
  2. How sound is the technique of imputing missing data based on blowing-up using Paid Up Capital (PUC)?
  3. When the new MCA-21 dataset is used, are manufacturing firms being correctly identified?

    Questions about measuring output and intermediate consumption in the formula for Gross Value Added (GVA)


    There are many concerns about how GVA has been estimated using firm data. In the paper, we recreate the process of GVA estimation. We use the Goldar Committee report in letter and spirit, and use the production side approach to recreate the GVA for a set of firms that file in MCA-21. For this, we take the XBRL formatted data from MCA-21 and identify the data fields used to compute GVA.

    We also do a mapping of the XBRL fields with fields in CMIE Prowess and estimate the GVA. A detailed mapping can be found here. These two strategies give us a unique vantage point from which to evaluate discrepancies in GVA estimation.

    Conceptually, the use of the MCA-21 dataset involves a shift from the erstwhile Establishment to the new Enterprise approach of value addition. The establishment approach captured production based data from factories registered under the Factories Act. The enterprise approach captures financial data of firms, and goes beyond just manufacturing by capturing value addition from post-manufacturing, ancillary or related activities such as marketing, and operations of branch/head offices. How does this impact upon value addition? There are two parts to this answer.

    The first is the extent to which measures of output change.

    Under the establishment approach, “Sales” was the measure of output. In the current enterprise approach, output includes several disaggregated components of revenue: revenues from products, services, operating revenues, revenue from financial services, rental income, income from brokerage and commission, and other non-operating incomes. The Goldar Committee report contains only a limited discussion of the inclusion or exclusion of these revenue fields in GVA computation. Moreover, the data labels and tags of the XBRL fields are broadly based on items in Schedule-III of the Companies Act, and the lack of proper definitions of the fields makes the identification process cumbersome and prone to errors. It is evident from the composition of output that value addition is accruing not solely from manufacturing activities, but also from several related activities. This leads to inflated GVA levels, as output is now similar to the total income of the company, and not industrial sales.

    In the paper, we show a comparison with the previous sales based method and argue that changes in output composition alone can lead to increased levels of GVA. This will eventually push the growth rates upwards.

    Year        Based on sales    Based on disaggregated revenue    Difference
    2011-12           701896.6                          767311.4       65414.8
    2012-13           742237.2                          819228.5       76991.3
    2013-14           780371.1                          872178.1       91807.0
    Comparison of GVA based on the old and new methods (Figures in Rs. crore)

    We study the firms in the CMIE Prowess database. Using the traditional sales-based measure, our manufacturing GVA estimate is Rs.780,371.1 crore for 2013-14. Using disaggregated revenue, manufacturing GVA appears to be over-estimated by Rs.91,807 crore, owing to the inclusion of revenue from non-manufacturing activities.

    What is missing in the GVA formula is a clear rationale for including revenues from different non-manufacturing activities. If such activities form part of enterprise-level activity, the formula also requires a clear segregation of costs, identifiable data fields, and a consistent treatment, so that value addition from core manufacturing can be separated from other activities.

    The second issue is changes in measures of intermediate consumption.

    Identifying components of intermediate consumption at the enterprise level is equally difficult. Conventionally, subtracting the cost items related to production from output provides a measure of value addition entirely from manufacturing activities. However, with large and diversified enterprises, identifying cost items from financial data fields can pose significant challenges. A close scrutiny of the XBRL fields shows the omission of important cost components, such as power & fuel expenses, and advertisement and marketing related expenses. These are sizeable components, and their omission can understate costs, thereby overstating GVA. Thus, two possible reasons account for the distortion in GVA: an increase in output due to the addition of several revenue items, and omissions in the components of costs.
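    The two measurement issues can be summarised in a minimal Python sketch; the field names are illustrative stand-ins, not the actual XBRL tags or Prowess fields:

        # Illustrative contrast between the old sales-based GVA and the new
        # enterprise-level GVA; field names are stand-ins, not actual XBRL tags.

        def gva_establishment(firm):
            # Old approach: industrial sales less production-related costs.
            output = firm["sales"]
            costs = firm["raw_materials"] + firm["power_and_fuel"]
            return output - costs

        def gva_enterprise(firm):
            # New approach: output is close to total income, while some cost heads
            # (power & fuel, marketing) are omitted from intermediate consumption.
            output = (firm["sales"] + firm["services_revenue"] + firm["rental_income"]
                      + firm["brokerage_and_commission"] + firm["other_income"])
            costs = firm["raw_materials"]
            return output - costs

    Both effects push the enterprise-approach GVA above the establishment-approach GVA for the same firm.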

    Questions on the blow-up methodology


    Missing data imputation is done, in Indian GDP estimation, by assuming that GVA is proportional to Paid Up Capital (PUC). In the paper, we replicate the blow-up process by constructing an available and active set of companies based on random samples that give different PUC coverage. The details of the procedure have not been clearly documented in official publications. Several variants of the method are possible, such as blow-up for each range of PUC, blow-up by industry group, or blow-up by ownership type of company, among others.

    The PUC-based blow-up assumes that PUC and GVA have a deterministic and linear relation. This is at best a weak assumption, as one cannot draw sufficient inference about a company’s manufacturing activities by looking at its PUC value. In the paper, we show that the size distributions of PUC and GVA have no systematic relation, and thus PUC is not an appropriate basis on which to scale up GVA. Further, since the GVA contribution of a firm can be negative, the PUC-based blow-up presents a distorted picture, as the imputed contribution is always positive.

    Our analysis of the blow-up procedure reveals several shortcomings. First, the blow-up factor is sensitive to PUC coverage and can show a considerable increase as the number of non-reporting companies increases. Second, the variation in blown-up values is unpredictable, as there is no systematic trend across different values of the PUC factor. This leads to an unknown degree of error, as the addition due to blow-up can be significantly large compared with the actual contribution of the unavailable companies.

    On this problem, we are also able to offer a solution. In the paper, we show that using industry level growth rates of GVA to scale up the previous year’s GVA of unavailable companies is a feasible and superior method. We first classify each missing company into its industry and, based on the growth rate of GVA for that industry, scale up the last available GVA of the company. Using industry level growth rates of GVA has an advantage over the PUC-based blow-up, as it uses the previous year’s GVA of the missing company itself instead of scaling up the GVA of available companies. Industry growth rates capture the economic conditions faced by firms in an industry and provide a useful signal about the state of the business environment. Computationally, on average, the method gives a lower margin of error, lower variability, a better representation of a firm’s conditions, and a close approximation to the actual GVA contribution of the firm. The CSO could potentially shift to using this method.
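    A minimal sketch of the two imputation approaches, in Python, using toy data structures; since the official procedure is not documented in detail, the first function is only our stylised reading of it:

        # Two ways of imputing the GVA of non-reporting companies; a stylised sketch.
        # reporting: {company_id: (puc, gva)} for companies that filed.
        # missing_puc: {company_id: puc} for companies that did not file.

        def blow_up_by_puc(reporting, missing_puc):
            # PUC-based blow-up: scale total reported GVA by the PUC coverage ratio.
            total_gva = sum(gva for _, gva in reporting.values())
            reported_puc = sum(puc for puc, _ in reporting.values())
            factor = (reported_puc + sum(missing_puc.values())) / reported_puc
            return total_gva * factor

        def impute_by_industry_growth(last_year_gva, industry_of, growth):
            # Alternative: scale each missing company's previous-year GVA by the
            # GVA growth rate of its own industry.
            return {cid: gva * (1 + growth[industry_of[cid]])
                    for cid, gva in last_year_gva.items()}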

    Are manufacturing companies being correctly identified?


    The Goldar Committee report mentions using the ITC-HS product codes for the identification of manufacturing companies. In the absence of such codes, the Company Identification Number (CIN), which contains the NIC code, can be used to identify the nature of business activity of the company. The problems with both of these options are known. What is unknown is the extent of misclassification of companies and the resulting error in the GVA estimate.

    The reliance on the ITC-HS code has several problems. Only 59% of the 30,006 companies filing in XBRL across all industries had reported the ITC-HS codes for products and NPCSS codes for services. However, even having the codes does not solve the problem. The codes only identify a product; they do not distinguish between trading in that product and manufacturing it. Thus, using such codes does not provide an assurance that value addition is being correctly captured for manufacturing products.

    The problem is compounded in cases where the codes are unavailable. At present, a company’s CIN and the details on its website are used to identify its business activity. This yields misclassification. A company’s 21 digit CIN does not change once it has been created at the time of registration. Over time, a company may change the nature of its business activity or diversify into other sectors. Such a change is not reflected in the CIN of the company. Using the CIN can therefore be misleading, since the top revenue generating activity of the company may be different from the one recorded in its CIN.

    The NIC classification also changes from time to time. This adds to the complexity of identification in two ways: first, changes in the business activities of companies are independent of changes in NIC codes, and second, a particular NIC code may not reflect the same business activity over time.

    In the paper, we analyse this problem by studying two groups: (i) companies that operate as non-manufacturing entities, but have their NIC codes registered in a manufacturing activity, and (ii) companies that are into manufacturing, but have their NIC codes registered in some other economic activity. We show that there are a large number of companies in both categories, which can create a significant distortion in the GVA estimate. We argue that any classification of companies based on the ITC-HS code, the CIN code, or hand-mapping based on clues gathered from company names or details from their websites is likely to be incorrect. Correct classification requires careful hand-analysis of each firm.

    Conclusion


    Sound computation of GDP is essential to decision making by the government and by the private sector. With imperfect observation of GDP, on many questions, we are flying blind. There has been a great amount of criticism of the Indian GDP data in recent years, as the high growth rates seen in the official data are inconsistent with trusted private databases. Our paper contributes three new blocks of knowledge to one component of the problem, i.e. measurement of manufacturing GDP.


    Amey Sapre is an Economics Ph.D. student at IIT, Kanpur and Pramod Sinha is a researcher at NIPFP.

    Monday, August 29, 2016

    Are fleeting orders by high frequency traders a source of market abuse?

    by Nidhi Aggarwal and Chirag Anand.

    SEBI recently released its discussion paper on algorithmic trading. The paper proposes several measures to address various concerns that have been expressed about the rise of new technology in the field of financial markets. One of the candidate interventions that SEBI has proposed is the imposition of 'minimum resting time for orders'. SEBI proposes imposing a resting time of 500 milliseconds (ms) during which an order will not be allowed to be amended or cancelled. In this article, we bring evidence to bear on this one candidate intervention.

    The rationale for the proposed measure is to curb 'fleeting orders' or orders that appear and disappear within a very short period of time. SEBI's proposed regulatory intervention, that there should be a minimum resting time of 500 ms, may suggest that orders modified/cancelled in less than 500 ms are considered by SEBI to be fleeting orders, though the discussion paper does not say this explicitly.

    A central objective of the regulation of financial markets is to block market abuse. How can fleeting orders be connected with market abuse? Orders without a clear intent to trade may falsify perceived liquidity and price in the marketplace. Through this, placing fleeting orders could be a tool for misleading other traders. In the market abuse literature, there is a concept known as "order spoofing". This involves placing a visible order in the opposite direction of the trade that is genuinely desired. For example, a seller might post a small buy order priced above the current bid, in the hope of convincing other buyers to match or outbid this. If that occurs, the trader can then sell into this (higher) price.

    Fleeting orders can also contribute to "quote stuffing", which can affect the ability of other traders to send their orders to the exchange by essentially flooding the systems. This is tantamount to the stratagem in the field of computer security that is called a `denial of service attack'.

    There are, however, legitimate and important reasons for rational persons to place fleeting orders. A trader may cancel and resubmit a limit order when the market moves away from the original limit price. This will especially occur during volatile times when information arrival in the market is high. Many trading strategies look at the touch -- at the bid and the ask price -- and not just at the last traded price.

    A perfectly legitimate trading strategy runs as follows:

    Watch the bid and the ask price, continuously compute (bid+ask)/2 which is the reference price, always have a limit order to sell at 0.1% above this reference price. 

    A person may use such an algorithm to sell a large block of shares while hoping to get a good execution price. This algorithm would dance continuously, refreshing the limit order every time (bid+ask)/2 changes, which is much more often than a change in the last traded price. That is, this trading strategy would undertake more revisions per unit time when compared with the number of trades per unit time.
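    A minimal sketch of such a repricing rule in Python; cancel() and place_limit_sell() are hypothetical placeholders for a broker or exchange interface, not calls from any real library:

        # Stylised event handler: keep a resting sell order at 0.1% above the mid
        # price, refreshing it whenever the touch moves. cancel() and
        # place_limit_sell() are hypothetical placeholders for a broker interface.

        def on_quote_update(bid, ask, state, cancel, place_limit_sell):
            mid = (bid + ask) / 2.0                      # the reference price
            if state.get("last_mid") == mid:
                return                                   # touch unchanged: leave the order alone
            if state.get("order_id") is not None:
                cancel(state["order_id"])                # this cancellation can look 'fleeting'
            state["order_id"] = place_limit_sell(price=1.001 * mid, qty=state["qty"])
            state["last_mid"] = mid

    An order managed this way is revised every time the touch moves, which is far more often than the last traded price changes, so a high cancellation rate can arise from a perfectly legitimate execution strategy.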

    Every now and then, a trader might just switch a limit order to a market order to get immediate execution (Hasbrouck and Saar, 2009). This would look like a fleeting order as the trader changed his mind and scrapped a limit order after a very short time.

    Before designing an intervention, SEBI needs to examine the data to look at the fraction and nature of fleeting orders in the market. The discussion paper has no evidence about the existence of fleeting orders, or evidence that there are problems in what is going on at Indian exchanges. Without a clear demonstration that the issue exists, the coercive power of the State should not be used. If we go down the path of using State coercion without a foundation of hard evidence, then there is a high chance that State power will merely reflect competing political pressures, where various factions try to use State power as a tool for furthering their business objectives.

    In this article, we analyse the questions surrounding fleeting orders using data for orders and trades from the National Stock Exchange of India.

    Data description


    The database and computational challenges of such work are immense. Hence, we use two months of data for the analysis.

    Ideally, this work would have been done using data for June and July 2016, but our computational infrastructure broke down in January 2014, and we were forced to make do with the most recent available complete months, which were November and December 2013. The intensity of algorithmic trading in November and December 2013 is the same as that which has prevailed in the months since. Hence, we are on sound ground when we analyse this data.

    We studied the 6.5 billion records of data from these two months for the purpose of this article.

    Order cancellations


    A large fraction of orders on NSE are cancelled. In an analysis that we did in July 2015, where we studied the same months of November and December 2013, we found that 56.97% of new orders that entered the spot market, 94.11% of the orders on the single stock futures (SSF) market, 88.55% of orders on the  single stock options (SSO) market, 82.58% of the orders on the Nifty futures market, and 87.51% of the orders on the Nifty Options market were cancelled.

    Order cancellation is clearly a valuable tool for most traders on electronic markets. This is seen internationally also. For example, Hasbrouck and Saar (2002) find that 93% of limit orders are cancelled on INET. This is true for other exchanges including NYSE, the Australian Securities Exchange and so on.

    In a deep sense, algorithmic trading is merely trading by other means.  Using data from NASDAQ, Subrahmanyam and Zheng (2015) document that cancellation ratios of high frequency traders are similar to that of the non-high frequency traders.

    A large percentage of cancellations does not, by itself, imply the existence of fleeting orders. A fleeting order involves placing a limit order inside the touch (i.e. between the bid and the ask) and then quickly cancelling it (Fong and Liu, 2010). There are thus three steps in identifying fleeting orders: we must count (a) orders that were cancelled, (b) which were cancelled quickly, and (c) which were near the touch. We now do these calculations for the NSE spot and SSF markets.
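    As a minimal sketch of this three-step classification in Python, suppose each order record carries a submission time, a cancellation time (if any), the limit price, and the best bid and ask prevailing at submission; these field names are illustrative, not the NSE data layout:

        # Stylised classification of a single order as 'fleeting', per the three
        # steps above; field names are illustrative, and times are in seconds.

        def is_fleeting(order, max_life_seconds=1.0):
            if order["cancel_time"] is None:
                return False                             # (a) never cancelled
            life = order["cancel_time"] - order["submit_time"]
            if life >= max_life_seconds:
                return False                             # (b) not cancelled quickly
            inside_touch = order["best_bid"] < order["limit_price"] < order["best_ask"]
            return inside_touch                          # (c) placed inside the touch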

    Duration of cancelled orders


    We analyse the securities which were traded on the derivatives market in 2013. These were the top 150 firms. We group these securities by market capitalisation: the securities with the highest market capitalisation are in Q1, and the securities with the lowest market capitalisation are in Q4. For each quartile, we measure the fraction of orders which were cancelled. We go on to measure the fraction of all orders which were cancelled within one second of arrival. One second is a conservative threshold when compared with SEBI's proposal of 0.5 seconds; if SEBI's proposed threshold of 0.5s were used, the fraction of orders seen would be lower.
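    The computation behind the two panels of the table below is, in stylised form (column names are illustrative; requires pandas):

        # Stylised computation of Table 1 from an order-level dataset; column
        # names are illustrative, not the NSE data layout. Requires pandas.
        import pandas as pd

        def cancellation_shares(orders: pd.DataFrame) -> pd.DataFrame:
            df = orders.copy()
            df["cancelled"] = df["cancel_time"].notna()
            life = (df["cancel_time"] - df["submit_time"]).dt.total_seconds()
            df["fast_cancel"] = df["cancelled"] & (life < 1.0)
            # Shares of all unique orders, by market capitalisation quartile.
            return df.groupby("mcap_quartile")[["cancelled", "fast_cancel"]].mean() * 100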

    All values as % of total unique orders entered

    Panel A: Orders cancelled
    Market Cap Quartiles       Spot      SSF
    Q1 (Highest)              67.23    94.06
    Q2                        58.83    91.15
    Q3                        51.58    90.62
    Q4 (Lowest)               41.19    85.12

    Panel B: Orders cancelled in less than 1 second
    Q1 (Highest)              36.84    70.06
    Q2                        28.11    61.05
    Q3                        22.23    58.06
    Q4 (Lowest)               12.60    45.23

    Table 1: Order cancellations on the spot and SSF markets in 2013

    Panel A of Table 1 shows the share of cancelled orders in all unique orders, while Panel B shows the share of orders cancelled within one second of arrival in all unique orders. We see that, in comparison to the SSF market, the spot market experiences a lower percentage of order cancellations within one second of arrival. In addition, we see that the percentage of order cancellations within one second is higher for large market capitalisation stocks. This is consistent with the fact that the biggest firms are the subject of the most intensive scrutiny by the financial markets.

    The biggest value in Panel B is 70.06%: a full 70.06% of the SSF orders for top quartile stocks are cancelled within 1s. The smallest value is 12.60%: just 12.60% of the spot market orders for bottom quartile stocks are cancelled within 1s. We should note that this is the bottom quartile within the top 150 stocks on NSE, i.e. the stocks ranked 113 to 150.

    We now turn to measuring the extent to which these fast cancelled orders could be termed fleeting orders. We only focus on the orders cancelled within one second of their arrival.

    Position of 'fast' cancelled orders before exit


    The table below shows the position, in the limit order book, of fast cancelled orders just before they were cancelled. 'At best' indicates that the order was at the best prices in the book, (1,3] that it was placed at depth two or three, (3,5] that it was placed at depth four or five, and >5 that it was placed beyond the top five prices in the order book.

    All values as % of total unique orders entered
    'Fast' cancelled orders: orders cancelled in less than 1 second

    Market Cap Quartiles    At best    (1,3]    (3,5]      >5      Sum
    Panel A: Spot
    Q1 (Highest)               2.47     5.46     5.59    23.31    36.83
    Q2                         5.22     7.25     5.09    10.55    28.11
    Q3                         7.12     6.54     3.07     5.50    22.23
    Q4 (Lowest)                5.14     3.82     1.56     2.08    12.60
    Panel B: SSF
    Q1 (Highest)               3.66     8.19     9.83    48.37    70.05
    Q2                         6.18    11.18    10.89    32.82    61.07
    Q3                         4.96    10.75    12.09    30.27    58.07
    Q4 (Lowest)                5.70    13.04    11.30    15.19    45.23

    Table 2: Position of fast cancelled orders in the order book in 2013

    The value 2.47 in the first row of the table indicates that for the stocks with the highest market capitalisation, 2.47% of orders were at the best prices, i.e. at the touch, and were rapidly cancelled. Similarly, the value 5.59 in the first row shows that for the highest market capitalisation stocks, 5.59% of the orders were at the fourth or fifth best price level in the order book, and were rapidly cancelled. The last column adds up all the previous columns, and matches the share of orders cancelled within one second shown in Panel B of Table 1.

    This table offers fascinating evidence about high frequency trading in India:

    1. The incidence of fleeting orders is very small: The biggest value seen is for Q3 stocks on the spot market, where 7.12% of orders were at the best prices and were cancelled within one second of their arrival.
    2. An overwhelming majority of fast order cancellations occur away from the best prices. As an example, in the 1st row, fast order cancellations were 36.83% of orders, of which 2.47 percentage points were at the touch.
    3. Stocks with the highest market capitalisation, where algorithmic trading is the most intense, experience a low incidence of fleeting orders as a share in total orders.

    We cannot examine the intent behind these cancellations, since that requires knowledge of trader identities, which we do not have in our data.

    Implications


    Good regulation making requires data analysis and scientific evidence. The legislative function of regulators (i.e. the drafting of regulations) is primarily a research function: it requires deeply understanding the world, identifying market failures, and identifying parsimonious instruments of intervention that go to the root cause of the market failure. Globally, regulators such as the SEC and ASIC have deployed empirical research to determine the need for an intervention. The FSLRC handbook requires that regulators must do cost-benefit analysis before issuing any new regulation, where such research would be an early first stage of the regulation-making work. These capabilities are required in regulators in India if we are to build high performance organisations.

    Our analysis above has many important implications for the policy analysis of the proposed minimum resting time of 500 ms:

    1. We used a more conservative measure -- order cancellation within 1 s. We find little evidence of fleeting orders in India.
    2. We have undertaken the first stage of the research -- counting fleeting orders. If the coercive power of the State were to be used in proscribing fleeting orders, SEBI needs to show evidence that this small proportion of fleeting orders is adversely affecting market quality.
    3. A regulation that interferes with all orders in order to influence the tiny proportion of fleeting orders is placing a burden upon society at large because it wishes to block a rare event. We should think more about the tradeoffs between prevention and enforcement. Perhaps it would be better for SEBI to build knowledge about how to enforce against market abuse in the HF environment, instead of imposing the costs of prevention upon society. SEBI's proposal raises concerns about the possibility of faulty tradeoffs in security. There is a public choice theory problem here: It is a stroke of the pen for SEBI to impose restrictions upon citizens, while it is hard work for SEBI to build State capacity in enforcement.

    SEBI's discussion paper proposes seven interventions:

    1. Minimum resting time for orders
    2. Frequent batch auctions
    3. Random speed bumps of delays in order processing/matching
    4. Randomisation of orders received during a period (say 1-2 seconds)
    5. Maximum order message-to-trade ratio requirement
    6. Separate queues for colo orders and non-colo orders (2 queues) 
    7. Restrict access to tick-by-tick data feed.

    This article deals with the first: minimum resting time. As emphasised above, our work is limited: We have only counted fleeting orders, we have not gone into the question of demonstrating that fleeting orders have an adverse impact upon market quality. This kind of research is required on all the other six proposed interventions before policy decisions can be taken. This suggests the scale of research capabilities which are required before wielding the coercive power of the State in the legislative wing of a financial regulator.

    References


    The causal impact of algorithmic trading on market quality by Aggarwal N, Thomas S, 2014, IGIDR Working Paper.

    The changing landscape of equity markets by Aggarwal N, Anand C, 10 July 2015, Ajay Shah's blog.

    Limit Orders and Volatility in a Hybrid Market: The Island ECN by Hasbrouck J and Saar G, 2002, Working Paper, New York University.

    Technology and liquidity provision: The blurring of traditional definitions by Hasbrouck J, Saar G, 2009. Journal of Financial Markets, Volume 12, Issue 2, May 2009, p. 143-172.

    Limit order revisions by Fong K and Liu W, 2010, Journal of Banking and Finance, Volume 34, Issue 8, August 2010, p. 1873-1885.

    Limit Order Placement by High-Frequency Traders by Subrahmanyam A and Zheng H, Working Paper, 2016.

    Author: Chirag Anand

    Chirag Anand is a researcher in the field of technology and policy.

    Sunday, August 28, 2016

    Incrementalism versus transformative change

    by Ajay Shah.

    At a show at Niti Aayog recently, Narendra Modi pleaded for big bang reforms. He said:
    "No country can afford any longer to develop in isolation. Every country has to benchmark its activities to global standards, or else fall behind".
    "The younger generation in our own country is thinking and aspiring so differently, that government can no longer afford to remain rooted in the past".
    "If India is to meet the challenge of change, mere incremental progress is not enough. A metamorphosis is needed."
    "My vision for India is rapid transformation, not gradual evolution".
    "The transformation of India cannot happen without a transformation of governance".
    "We have to change laws, eliminate unnecessary procedures, speed up processes and adopt technology. We cannot march through the twenty first century with the administrative systems of the nineteenth century."
    In that same meeting, the Singapore deputy prime minister Tharman Shanmugaratnam said:
    "Reforms agenda is largely unfinished and pace of reforms have to be stepped up. You are on a good batting wicket but you can’t keep on scoring singles."
    On the same day, Raghuram Rajan said:
    "Observers may be impatient, but my belief is that steady and irreversible reform and mini bangs like yesterday’s (Thursday) rather than big bang is the need of the hour".
    How should we think about these questions? I feel there are eight useful ideas in thinking about transformative change versus incremental change.

    1. We are blessed when incrementalism will suffice. It is nice to be in a place where incremental reforms can get to a good destination. Incremental reforms are much easier and safer. E.g. it's possible to get to a pretty good NPS through incremental work at PFRDA.
    2. Incrementalism does not always suffice. Many times, we do not have the luxury of doing incremental reforms, as there is no incremental way to get to a good destination. No amount of incremental reform could have brought about the GST. There is no way out, but to bite the bullet and do the big difficult change, which is the GST. Similarly, there was no way out but to do the big NPS reform which the NDA did in December 2002, which was transformative change and not incremental change. These were big changes which had to be done. Singles vs. fours vs. sixes seem to be slower or faster ways to chase the same target, but the problem in public policy is fundamentally different. We have to obtain qualitative change, many times.
    3. Planning is required for government but not for society. Social engineering, as in central planning for the lives of citizens, seems pretty abhorrent. We should not do central planning about how citizens and the private sector will pursue life, liberty and happiness. E.g. financial regulators should not specify details of products, processes and players: this is high-modernist fantasy which all too often gets us to grief. But when it comes to government, there is a need to plan the organisation, processes and checks and balances. Like any large organisation, there is a need to plan the contours of government, envision the organisation diagram, objectives, powers, processes, accountability, etc. There will be discrete jumps in going from one organisation structure to another, which will often constitute transformative change and not incremental change.
    4. Make pawn moves within a full strategy. It is important to have a full picture, an internally coherent plan, and then build towards this incrementally. We must have a conceptual framework within which we push pawn moves.
    5. Those who only see the pawn move will often make the wrong pawn moves. Incremental change is only useful when it fits in the larger picture. Else, it's largely a waste of time. As an example, amending the RBI Act to add in inflation targeting and the MPC was part of a full picture of macro/finance reforms. The recent batch of corporate bond market reform, which is extolled as a mini-bang, will not matter.
    6. A big barrier to big bang reforms in India is State capacity. As an example, the staff quality in the GST project is the binding constraint. To be able to get deeper change, we need to grow capacity of the kind required for deeper change, and we should prioritise areas for work based on the availability of technical skills.
    7. What does it take, to achieve deeper change? It takes a lifetime of work in India to understand what transformative change is required, and to be part of the policy process towards this kind of work. People who haven't put in this kind of intensity see a hill they can't climb.
    8. The cop out. Transformative change is harder. It is often easier to duck, by claiming credit for minor things, or claiming that major things were infeasible. However, in India, the fact is that at numerous points in history, we have achieved big transformative change. There are people here in India who know how it's done.

    Wednesday, August 24, 2016

    Marginal cost of public funds: a valuable tool for thinking about taxation and expenditure in India

    by Ajay Shah.

    In an ideal world, taxation would be done in a frictionless way. The ideal world is a nice place where there are no transactions costs either for taxpayers in compliance, or for the tax authorities in collection. There would be no illegality and criminality surrounding the tax system. Most important, the presence of taxation would not modify the resource allocation in the slightest.

    `Resource allocation' is economist-speak for the magnitudes of labour and capital, and the technology through which they are used. `Technology' is economist-speak for both the science and technology, and the business methods through which resources are utilised. In the ideal world, firms would produce based on pure efficiency considerations. Nothing about the questions `What to produce?' and `How to produce?' would be modified in the slightest by the tax system.

    The government would collect taxes in this ideal world without imposing any excessive burden upon society. In other words, the cost to society of Rs.1 of spending by the government would be only Rs.1.

    This notion is formalised as the `Marginal Cost of Public Funds' (MCPF). This answers the question: When the government spends Rs.1, what cost does it impose upon society? As with most economics, this question is posed `at the margin', i.e. what's the cost to society of the last Rs.1 that the government spent? In the ideal world, the MCPF is 1, but in the real world, it's always worse (i.e. bigger than 1).

    The aspiration to get an MCPF of 1 was precisely expressed by Pranab Mukherjee in his July 2009 budget speech where, in para 31, he says:

    I hope the Finance Minister can credibly say that our tax collectors are like honey bees collecting nectar from the flowers without disturbing them, but spreading their pollen so that all flowers can thrive and bear fruit.

    This happy destination is one where the MCPF is 1, i.e. where the cost to society of Rs.1 of tax collection is Rs.1.

    Why do we get MCPF $> 1$?


    Why is it that in the real world, we always have an MCPF that exceeds 1? There are costs of compliance, costs of administration, corruption, illegality and criminality. When all these costs are accounted for, the cost to society of Rs.1 of marginal tax revenue exceeds Rs.1.

    Most important is the issue of a modified resource allocation. People respond to incentives. If income is taxed, people work less. If apples are taxed, people eat more oranges. This results in a distorted resource allocation, which lowers welfare, i.e. lowers GDP. When the act of taxation distorts the resource allocation, and thus reduces GDP, the cost to society of the last Rs.1 of taxation exceeds 1.

    There are seven sources of MCPF$>1$ in India:

    1. Income tax distorts the work-leisure tradeoff and the savings-consumption tradeoff.
    2. Commodity taxation distorts production and consumption, particularly when there are cascading taxes.
    3. We in India have a menagerie of `bad taxes' including taxation of inter-state commerce, cesses, transaction taxes such as stamp duties or the securities transaction tax, customs duties, taxation of the financial activities of non-residents. From 1991 to 2004, we thought the tax system was being reformed to get rid of these, but from 2004 onwards, things have become steadily worse, starting with the education cess and the securities transaction tax. All these are termed `bad taxes' in the field of public finance because when money is raised in these ways, the MCPF $\gg 1$.
    4. India relies heavily on the corporate tax, and has double taxation of the corporate form. In the last decade, corporate income tax and the dividend distribution tax added up to 35% of total tax collection. The double taxation induces firms to organise themselves as partnerships and proprietorships.
    5. There is the compliance cost by taxpayers and tax collectors, which is a pure deadweight cost. At the extreme, these include the costs imposed upon society by illegality and criminality owing to corruption in the tax system. When some firms get away with tax evasion, this changes the incentives of ethical firms to invest, which imposes enormous costs upon society as the most ethical firms are often the highest productivity firms.
    6. There are the consequences for GDP of the political economy of lobbying for tax changes, which arise when we do not have simple single-rate tax systems. E.g. a single customs duty rate (say 5%) is much better than having many different rates. Similarly, 80% of the countries which introduced a GST after 1995 have opted for a single rate GST.
    7. At the margin, public spending is actually financed out of deficits which are deferred taxation, intermediated through the processes of public debt management. Hence, in thinking about the MCPF, we must think about deficits and their financing also. Additional deadweight cost appears here, as we do financial repression (some financial firms are forced to buy government bonds). This is akin to a narrow commodity tax and is a bad tax.

    All good thinking in tax policy and tax administration impinges upon the MCPF. If we set up a flawless GST, the MCPF will go down. If we reform tax administration, the MCPF will go down. The distortion associated with a tax rises roughly in proportion to the square of the rate: hence the MCPF will be lower at a GST rate of 12% than at 24%.
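    The rate-squared intuition is the textbook Harberger approximation for the deadweight loss of a small tax, restated here for clarity:

    $$ \text{DWL} \;\approx\; \tfrac{1}{2}\,\varepsilon\,\tau^{2}\,p\,q, $$

    where $\varepsilon$ is the relevant elasticity, $\tau$ is the tax rate and $pq$ is the size of the taxed base. Doubling $\tau$ from 12% to 24% roughly quadruples the distortion while at best doubling the revenue, which is why the MCPF rises with the rate.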

    How big is the MCPF in India?


    As the discussion above suggests, there is the assessment of the MCPF at the level of society as a whole, and there is its measurement one tax at a time, where the `bad taxes' leap off the page.

    Estimation of MCPF is hard. Computable general equilibrium models are useful for thinking about shifting from commodity specific taxes to a single rate VAT. But most of the elements above are beyond the analytical reach of empirical economics.

    One uniquely Indian problem is that, by international standards, most of these distortions have been abolished elsewhere. Most mature economies do not practise financial repression, have low corruption in tax administration, do not have a political economy of lobbying for tweaks to tax rates, and do not have any of the bad taxes (taxation of inter-state commerce, cesses, transaction taxes, customs duties, taxation of financial activities of non-residents). Most mature economies have moved to a commodity-neutral VAT or sales tax. As with most parts of the Indian macro/finance environment, India is an outlier with extremely poor institutional mechanisms in taxation. Nobody does the things that we do, and hence there is no international literature which helps us measure the MCPF in India. All we can know is that the Indian MCPF is very large.

    Let's look at the international literature. Many papers find values from 1.25 to 2 in OECD countries. As an example from an advanced economy, Dahlby and Ferede, 2011, find that the Canadian corporate income tax has a marginal cost of public funds of 1.71, the personal income tax yields a value of 1.17 and the general sales tax a value of 1.11. Feldstein, 1999, estimates a value of 2.65 for the US. Ahmad and Stern, 1984, estimate that the marginal cost of public funds in India is between 1.66 and 2.15 for excise, between 1.59 and 2.12 for sales tax, and between 1.54 and 2.17 for import duties.

    When compared with conditions in India, the values seen in these existing papers are small, as the full-blown distortions of the Indian tax system are not found in those countries. The one paper on India (Ahmad and Stern, 1984) addresses only a small part of the distortions associated with the Indian tax system. Indian policy thinkers who read Feldstein, 1999, which estimates a value of 2.65 in the US, would be delighted to achieve the conditions described in that paper.

    Putting these considerations together, Vijay Kelkar, Arbind Modi and I believe that the true MCPF in India may exceed 3.

    Further research on this question is very important. However, we are not able to visualise a research strategy that could put all the seven sources of distortion into one estimate, using today's knowledge of public economics. We are forced to form a guesstimate, and we propose the value of 3.

    Implications


    A central objective for tax reform should be to modify tax policy and tax administration so that the MCPF comes down. A central consideration in expenditure policy should be to narrow expenditure down to the few things where we can be convinced that the marginal gains for society exceed the MCPF, i.e. the last rupee of spending gives benefits to society exceeding the hurdle rate of Rs.3.
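    Stated as a decision rule, in our notation:

    $$ \frac{\text{marginal social benefit of the programme}}{\text{marginal rupee of public spending}} \;\ge\; \text{MCPF} \;\approx\; 3. $$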

    Spending on private goods. The value to society of gifting me Rs.1 to buy private goods that I like is Rs.1. Subsidy / transfer programs do not meet the test.

    Leakages in private goods. If government intends for person $x$ to be the recipient of a private good of Rs.1, but owing to inefficiencies and corruption, Rs.0.5 reaches person $x$ while Rs.0.5 reaches an unintended beneficiary such as an official or a politician, this still yields gains for society of Rs.1, except that the gains are allocated differently from what was intended. This does not change the fact that the aggregate gain to society is Rs.1, which is below the threshold of Rs.3.

    Spending on public goods. Spending on many public goods does yield gains that exceed the hurdle rate. We spend Rs.400 crore a year on running SEBI which produces the public good of financial markets regulation. SEBI does this poorly, and there are many ways in which SEBI can better utilise this money. But there is little doubt that the gains to society exceed Rs.1200 crore. If you imagined a world without SEBI, Indian GDP would drop by more than Rs.1200 crore.

    Similarly, once we build a government agency to control air pollution, this will yield gains to society (in terms of the reduced burden of respiratory illness) vastly greater than the direct expenditure on running the agency. The same cannot be said about public spending that produces the private good of health care for people with respiratory ailments. In anticipation of the October 2016 epidemic of dengue, let's fight mosquitoes. The gains to society from vector control easily exceed the hurdle rate, while the services of hospital beds and crematoriums are private goods.

    Leakages in public goods. With public goods programs, we will sometimes get the situation where inefficiencies in the expenditure profile push the marginal gains below Rs.3 per rupee spent. Sometimes, these programs can be salvaged by improving spending efficiency. At other times, we should just admit that we have low State capacity in India and shut down the spending program.

    How big should the State be? We should increase spending as long as the marginal gains to society exceed the MCPF. As the MCPF in India is high, this implies that the optimal scale of spending in India is lower. Contemplating a large State is a luxury for people who live in countries where the MCPF is low.

    The free rider problem with sub-national governments. In India, the dominant source of resourcing for sub-national governments is central funds. In this case, if I am one state in India, it is rational for me to advocate bigger expenditure, as I do not pay the full cost of the distortions experienced by the country. My marginal gains from spending one more rupee are mine, while the MCPF is imposed on the whole country. To say this differently, Greece always wants more expenditure by the EU. Hence, sub-national governments should not have a say in the overall size of government. This perspective implies that in the new GST Council, the two-thirds vote share of states will generally favour a higher GST rate.
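    A back-of-the-envelope way to state the free rider problem (our stylisation; the cost share $s$ below is an assumed illustrative number):

    $$ \text{a state lobbies for a programme whenever } MB \;\ge\; s \times \text{MCPF}, $$

    where $MB$ is the benefit accruing to that state per rupee of central spending and $s$ is the share of the nationwide distortion that it bears. With $s = 1/10$ and an MCPF of 3, any programme yielding more than Rs.0.3 per rupee looks attractive to the state, even though the efficient hurdle is Rs.3.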

    Monday, August 22, 2016

    Interesting readings

    Who is afraid of algorithmic trading? by Ajay Shah in The Business Standard, 22 August.

    Recording each vote by M.R. Madhavan in The Hindu, 22 August.

    Creating space for financial services by Varad Pande, Nirat Bhatnagar and Raahil Rai in The Mint, 22 August.

    Why we need to stand up for the right to insult religion and beliefs by Sunny Hundal in The Hindustan Times, 21 August.

    Repeal the sedition law by Sudeep Chakravarti in The Mint, 19 August.

    GST bill a historic landmark but formidable challenges lie ahead by Sudipto Mundle in The Mint, 19 August.

    Tata vs DoCoMo: Two warring partners and one big mess by Deepali Gupta and Arijit Barman in The Economic Times, 18 August. Also see earlier work from NIPFP: link, link, and Treat the disease, not just the symptoms by Bhargavi Zaveri and Radhika Pandey in The Business Standard, 10 August.

    Farm Policy: The political economy of why reforms elude agriculture by Pravesh Sharma in The Indian Express, 18 August.

    Tracing GST's evolution as an idea by M Govinda Rao in The Business Standard, 18 August.

    Maharashtra to give land for mandis by Sandip Das in The Financial Express, 17 August.

    Fractured lands: How the Arab world came apart by Scott Anderson in The New York Times Magazine, 15 August.

    The Tor Project's social contract: we will not backdoor Tor by Cory Doctorow in The Boingboing, 12 August.

    App-only bank Mondo just got a banking licence by Oscar Williams-Grut in The Business Insider India, 11 August. In India, life is hard.

    "A Honeypot For Assholes": Inside Twitter's 10-Year Failure To Stop Harassment by Charlie Warzel in The Buzz Feed News, 11 August.

    Trai & transparency: The irregular regulator by Smriti Parsheera and Bhargavi Zaveri in The Economic Times Blogs, 10 August.

    The Sporting Spirit by George Orwell.

    National Agriculture Market and the political economy of agriculture marketing by Pravesh Sharma in NIPFP YouTube Channel.

    Saturday, August 20, 2016

    How can financial regulators combat mis-selling? Five solutions

    by Monika Halan and Renuka Sane.
     
    In a previous article (Banks are unfair in their role as financial advisors / distributors), we described our audit study on the sale of financial products across 400 bank branches in Delhi, India (Halan and Sane, 2016). In that paper, we found three things:

    1. Customers are mostly sold insurance products by private sector banks, and fixed deposits by public sector banks, regardless of what they ask for. This suggests that in private sector banks, which have high sales incentives, the high-commission product is mostly recommended. In public sector banks, where there are deposit mobilisation targets, fixed deposits are mostly recommended. In neither case is there concern about the best interests of the customer.
    2. Complex features of a product such as cost are rarely voluntarily disclosed. This suggests that when customers don't know what they don't know, it is likely that material information will be shrouded.
    3. When bank managers do answer questions on product features, most of the information is either incorrect, or incomplete. This suggests that either managers themselves do not know, or do not care, or deliberately mislead customers.

    Regulators around the world, including in countries such as India, have responded to the problem of mis-selling in retail finance by strengthening consumer protection regulations in the form of disclosure standards, bans on commissions and volume-based payments, and suitability requirements in the sale of products. Regulators might require sales staff to disclose product features but, as we show, have little control over whether these are actually disclosed, and importantly, disclosed truthfully. How can financial regulation do better?

    Solution 1: Refocus financial literacy upon distributors


    The qualitative study of the audit reports showed the lack of financial knowledge of the sellers of these products. For instance, public sector bank managers, whose banks have tie-ups with asset management and insurance firms, did not know the basics of the products they were selling. Some were not even familiar with product names. Most bank officers were not clear about the costs or other complex features. We also noticed that the more complicated the product, the worse were the disclosures, reflecting the lack of distributor education by product manufacturers. The policy debate in India has always emphasised financial literacy of the customer. We think that financial literacy efforts should also target distributors, and not just consumers.

    For an analogy, before an employee of a financial firm gets a password from NSE or BSE for trading derivatives, that employee must pass an examination which establishes a certain foundation of knowledge about derivatives. The institutional mechanism for these examinations was built by NSE before derivatives trading was launched in 2000. Similar work is required from RBI, to ensure that employees of banks have a minimum standard of knowledge about finance.

    Solution 2: Align incentives


    A front-line staff person, when selling the product that is part of his target, is doing what any rational economic agent will do: maximising his own benefit. One way to ensure that the front-line staff do the right thing by the customer is to redesign the incentives of the front-line seller. Regulations must reshape the incentives so as to align the interests of the product manufacturer, the seller and the customer.

    Solution 3: Fix Board accountability


    Regulators need to understand that front-line staff are simply responding to incentives embedded in the organisation. These incentives are established by the senior management, who report to the CEO. The CEO gets her direction from the Board of Directors. Some large life insurance firms have changed the product mix they offer to investors, and have invested heavily in technology to prevent large-scale mis-selling, because of a push from boards of directors who were concerned about the reputational damage that mis-selling of life insurance could do to the overall business group.

    Solution 4: Data in the hands of regulators


    The insurance industry has come together to set up a data repository to fight fraud. It has hired Experian to create a fraud monitoring network that will identify potential fraud by screening customer applications. The industry fears a rise in fraud because of new regulations that prevent insurers from rejecting claims once a policy has been in force for three years, even if fraud is proved.

    There is a need to raise our aspirations for knowledge and research on consumer protection by financial agencies. This requires database building. As an example, in the UK, every financial firm submits data about every retail financial product sale to the regulator (the FCA) every day. This makes it possible for the FCA to watch patterns in product sales. The FCA links this data to an array of third-party databases about individuals and neighbourhoods, and has systems which raise an alarm about potential mis-selling. In India, this requires building the Financial Data Management Centre (FDMC), writing regulations that require electronic submission of this data into the FDMC by all financial firms, and then creating research capabilities in financial agencies to use this data as the UK FCA does.
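    As an illustration of what such an alarm system might look like, here is a minimal sketch. The table layout, the toy data, the threshold and the helper names (`insurance_share_by_branch`, `flag_outlier_branches`) are our assumptions for exposition, not the actual design of the FCA or the proposed FDMC.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical transaction-level submissions: (branch, product_category).
sales = [
    ("branch_A", "insurance"), ("branch_A", "insurance"), ("branch_A", "fixed_deposit"),
    ("branch_B", "fixed_deposit"), ("branch_B", "mutual_fund"), ("branch_B", "fixed_deposit"),
    ("branch_C", "insurance"), ("branch_C", "insurance"), ("branch_C", "insurance"),
]

def insurance_share_by_branch(records):
    """Share of each branch's sales that are insurance products."""
    totals, insurance = Counter(), Counter()
    for branch, product in records:
        totals[branch] += 1
        if product == "insurance":
            insurance[branch] += 1
    return {b: insurance[b] / totals[b] for b in totals}

def flag_outlier_branches(records, z_threshold=0.8):
    """Flag branches whose insurance share is far above the peer average."""
    shares = insurance_share_by_branch(records)
    mu, sigma = mean(shares.values()), stdev(shares.values())
    if sigma == 0:
        return []
    return [b for b, s in shares.items() if (s - mu) / sigma > z_threshold]

print(flag_outlier_branches(sales))  # ['branch_C'] with this toy data and threshold
```

    A real system would, as the FCA does, join such flags with third-party data on customers and neighbourhoods before any supervisory follow-up; the point here is only that daily sale-level submissions make this kind of screening mechanical.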

    Solution 5: Use mystery shopping


    Mystery shopping is an accepted regulatory tool for discovering market failure in developed markets. For example, in 2005 the UK regulator, the Financial Services Authority (FSA), responded to concerns expressed by the media, consumer bodies and other organisations about the sales process for payment protection insurance (PPI) by commissioning a mystery shopping exercise. Through a mix of phone calls and face-to-face visits, shoppers posed as customers looking for a particular financial product. 32 of the 52 shoppers felt that they received 'limited information' about the PPI product.

    The results of such exercises feed into regulatory action. PPI mis-selling cost banks millions of pounds in returned premiums and compensation. The no-commission structure of the UK retail finance market has partly emerged out of evidence gathered through such exercises. Indian regulators need to use such exercises to prevent the tick-box regulatory compliance practised by financial services firms. Such mystery shopping projects must be done using the resources of the State; they should not require researchers like us to fund-raise for and manage these research projects.

    Conclusion


    Indian policy makers have long bemoaned the fixation of the Indian household with gold. They need to see the attraction of gold and real estate as a vote of no-confidence in the financial sector, born of recurrent scams in the market and the obvious regulatory failure to prevent even regulated firms from cheating households of their savings - as was seen in the ULIP scam (Halan, Sane and Thomas, 2014). A fair marketplace in retail finance that works for all three participants - the manufacturer, the distributor and the buyer - will transform the Indian financial market. The savings rates of Indian households are very high, but only with better consumer protection will we get large-scale participation by households in the formal financial system. The policy proposals of this article are aimed at building this trust, while retaining the dynamism of the market economy. They are part of the larger project of putting consumer protection at the heart of financial regulation.

    References


    Halan, Monika, Renuka Sane and Susan Thomas (2014), The case of the missing billions: Estimating losses to customers due to mis-sold life insurance policies, Journal of Economic Policy Reform, October 2014.

    Halan, Monika and Renuka Sane (2016). Misled and mis-sold: Financial misbehaviour in retail banks? NSE-IFF Working paper.


    Monika Halan is a consulting editor, Mint, and a consultant at NIPFP. Renuka Sane is a researcher at the Indian Statistical Institute, Delhi, and a visiting fellow at IDFC Institute.

    Saturday, August 13, 2016

    Banks are unfair in their role as financial advisors / distributors

    by Monika Halan and Renuka Sane.

    A major weak link in financial regulation in India is the lack of emphasis on consumer protection. An academic literature on this subject has been building up. The policy discourse has also shifted considerably, and the contours of the policy research and action program are now visible.

    For many years, regulators were in denial about these problems. This has started changing. A committee formed by the insurance regulator on the sale of insurance products through banks acknowledged mis-selling through the bank channel (IRDA, 2011). More recently, a circular dated August 1, 2016 from IRDAI (IRDAI, 2016) warned banks and corporate agents to stop mis-selling life insurance policies.

    The key questions for the research community concern obtaining objective evidence on the problems in the field of households and finance, which can then feed into the work program on financial sector policy. A key missing link is an understanding of how sales actually take place.

    With a financial product, a critical aspect of the sale is the disclosure made at the time of sale, since the product is intangible and its moment of truth can lie far in the future. Regulators might require sales staff to disclose product features, but have little control over whether these are actually disclosed, and importantly, disclosed truthfully. There is little evidence on whether agents, intentionally or otherwise, make mistakes in these disclosures. This can have large consequences, especially in environments such as India, where financial literacy is low and regulatory enforcement is weak.

    Financial products are sold through many channels in India. Understanding the sale of financial products through the banking channel is important for three reasons:

    1. There has been a rise in third-party distribution through the banking channel. In 2014-15, of the top ten mutual fund distributors on the basis of commissions earned, six were banks (Barbora and Viswanathan, 2016). In the case of insurance as well, banks had the largest share of new business premium for private insurance companies, though the state-owned insurer continues to be agency-dependent for sales. For example, banks became the largest sales channel for private sector life insurance companies by financial year 2015. The share of first-year premium from banks rose from 33.21% in 2010-11 to 47.37% in 2014-15 for the private sector insurance companies. Commissions from the sale of third-party products contribute substantially to bank profitability (Balaji and Bhaskaran, 2015).
    2. A 2013 Gallup poll showed that 70% of the Indians polled said they trusted banks. The corresponding figure was 13% for Greece, 27% for the UK and 37% for the USA. Depositors' trust in the basic banking function carries over to the purchase of third-party products, such as mutual funds and insurance, through banks.
    3. Recent financial inclusion efforts in India, such as the Jan Dhan Yojana, the Suraksha Bima Yojana and the Jeevan Jyoti Bima Yojana, are being routed through the banking channel. Instead of improving access to finance, a bank-led sales strategy, if it consists of mis-selling, may end up driving customers further away from formal finance.

    A new audit study


    In Halan and Sane (2016), we conducted 400 audits of the sales process for retail financial products in banks in Delhi, India. This was done by sending auditors into bank branches, where they posed as walk-in customers. Neither the auditors nor the bank managers knew the true motivation behind the study or the choice of questions.

    We investigate what financial products are being recommended by bank managers to walk-in customers. We also study the kinds and veracity of disclosures made in the process of sale. Specifically we ask:

    1. What products do bank-based managers recommend? How does this vary when the auditor makes a specific request vs. when the auditor appears uncertain? Are auditors who make specific requests, and are more certain of their requirements, able to purchase the product of their choice?
    2. What product features get disclosed? Do the more salient attributes of a product, such as returns, get disclosed more frequently, while complex product features such as costs, or charges on early exit get shrouded?
    3. Are these disclosures accurate?
    4. What might the drivers of product recommendations be? When remuneration is tied to sales-linked bonuses, are the most expensive products sold?

    We vary our audits to include informed and uninformed customers with different amounts to invest. An informed customer in the research design requests the Equity Linked Savings Scheme (ELSS), which is an open-ended diversified equity mutual fund with a three-year lock-in. In other cases, the customer (the auditor) acts uninformed, displaying a vague sense of wanting some tax-saving product without knowing which one. We also vary the amount available for investment. In some cases the request is to invest Rs.25,000 in either the ELSS or a tax-saving product; in other cases the amount to be invested is Rs.100,000.
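    For concreteness, this variation can be thought of as a small factorial design. The sketch below is illustrative only: the branch identifiers, arm labels and the equal-allocation rule are our assumptions, not the actual assignment protocol of Halan and Sane (2016).

```python
import itertools
import random

# The two dimensions varied in the audits: customer type and investment amount.
customer_types = ["informed_ELSS_request", "uninformed_tax_saver"]
amounts = [25_000, 100_000]
arms = list(itertools.product(customer_types, amounts))  # four treatment arms

def assign_audits(branches, seed=42):
    """Randomly assign each branch to one of the four arms, in roughly equal numbers."""
    rng = random.Random(seed)
    shuffled = branches[:]
    rng.shuffle(shuffled)
    return {branch: arms[i % len(arms)] for i, branch in enumerate(shuffled)}

# Hypothetical list of 400 branch identifiers.
branches = [f"branch_{i:03d}" for i in range(400)]
assignment = assign_audits(branches)
print(assignment["branch_000"])  # one of the four (customer type, amount) arms
```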

    What would we see in an ideal world? In the ideal world, bank managers would sell the product requested by the customer (in the case of the ELSS), either because it is a sound investment or because they are merely acting as distributors of the product and not as financial advisors. When the customer does not have a view on the product, the bank managers should make an effort to sell the most suitable product, or at the very least show all possible products so that the customer can make an informed choice. If, on the other hand, bank-based advisors are not working in the interest of the customer, they will try to steer both types of customers towards the product of their own choosing.

    Bank managers should provide correct and complete information about a product to the customer, especially because SEBI, IRDAI and RBI have regulations that require such disclosures. We, therefore, also study the kind of product features that get disclosed in the process of sale, and the veracity of these disclosures. Our focus is on the following aspects: a) Returns, b) Guarantees, c) Costs, d) Lock-in period and e) Optimal holding period.

    A criticism of the ELSS as the choice of the `sophisticated' investor is that it is a market-linked product, and it is likely that for many investors a guaranteed product such as a fixed deposit or an insurance plan is more appropriate. While there is merit in this argument, our evaluation of product recommendations does not really rely on the ELSS being the optimal product. If bank managers feel that the ELSS is not the most suitable product, then we should see this in the conversations they have with the auditors, as well as in the recommendations they finally make. The focus of the experiment is not so much on which is the better product as on the process by which a product is sold.

    Results


    Our experiment shows the following:

    1. Bank managers don't really make an effort to understand the client. Managers in public sector banks are less proactive in understanding the client and exert less effort.
    2. Overall, fixed deposits were the most recommended product (51% of the cases), followed by insurance (35%) and mutual funds (8%). When auditors did not have a specific product request and asked for any tax-saving product, mutual funds were recommended 2% of the time, while fixed deposits and insurance were recommended 53% and 36% of the time respectively.
    3. In private sector banks, where internal incentives are around commission income, the high commission product (i.e. insurance) is recommended most of the time (almost 75%). In public sector banks, where there are deposit mobilisation targets, fixed deposits are recommended (almost 72%).
    4. Of those who requested an ELSS product, only 14% were encouraged to buy it. 30% were actively discouraged, and 55% were presented with a neutral response. However, in 71% of the cases where the bank manager was neutral to the ELSS product in the beginning, our auditors later noted that the manager steered the conversation to other products, resulting in a product recommendation different from the ELSS.
    5. This seems to be because in several cases the bank managers themselves do not know what an ELSS is.
    6. Managers seem to be overly concerned about our auditors having to deal with risk in their portfolios in the context of the ELSS. However, a large proportion of the recommendations were ULIPs, which are also market-linked, or participating insurance plans, which too are partially market-linked.
    7. Voluntary disclosures concentrated around returns and guarantees. Customers were never made aware of the costs of the product unless they specifically asked about costs.
    8. A large proportion of the disclosures were incorrect, when tested against actual information in product brochures or actual past returns.
    9. This suggests a market with two extremes. The private sector prescribes the most expensive products, while the public sector prescribes the least-effort default product. In either case, unbiased financial advice in the interest of the customer seems to be missing. This is reminiscent of the situation in health care in India - where the private sector makes more of an effort and prescribes more drugs (often to the detriment of the patient), while the public sector does less of both (Das and Hammer, 2007).

    Conclusion


    We find that in private sector banks, where staff have high sales incentives, the high commission product is recommended. In public sector banks, where there are deposit mobilisation targets, fixed deposits are recommended. We also find that the more complex features of a product, such as costs and optimal holding period, are very rarely voluntarily disclosed. When specifically requested, information provided is inaccurate or incomplete.

    It is possible that bank managers themselves do not know the product features well enough to disclose them correctly, or that they perceive that customers are impatient and do not want to listen. However, if regulations require managers to make disclosures, then their own ignorance, or their inability to engage with an impatient customer, requires regulatory attention.

    Our results point to the difficulties in using disclosures to achieve better consumer outcomes. Even if disclosures are made mandatory on product brochures, it is unlikely that they get conveyed to the customer correctly. Regulators have taken the view that since the customer has signed the documents, the customer is responsible for the purchase. The problem is made worse by the failure to fix responsibility on the sales channel for mis-sold products. Unless there is a mechanism of enforcement, a disclosure policy is unlikely to help achieve better outcomes.

    Households are not being treated well by Indian finance. Much more needs to be done by way of the academic research agenda and the policy research agenda.

    References


    Balaji, Kavya and Deepti Bhaskaran (2015). Why banks resort to misselling. Mint, 22 December 2015.

    Barbora, Lisa Pallavi and Viviana Vishwanathan (2016). Tough to separate sales from advisory. Mint, 28 April 2016.

    Das, Jishnu and Jeffrey Hammer (2007). Money for nothing: The dire straits of medical practice in Delhi, India. Journal of Development Economics 83, pp. 1-36.

    Halan, Monika and Renuka Sane (2016). Misled and mis-sold: Financial misbehaviour in retail banks? NSE-IFF Working paper.

    IRDA (2011). Report of the Committee on Bancassurance. Committee Report. Insurance Regulatory and Development Authority.

    IRDAI (2016). Complaints of Misselling/Unfair Business Practices by Banks/NBFCs. Ref: IRDA/CAGTS/CIR/MSL/152/08/2016.


    Monika Halan is a consulting editor, Mint, and a consultant at NIPFP. Renuka Sane is a researcher at the Indian Statistical Institute, Delhi. We thank the NSE-IFF initiative on household finance for funding.