Search interesting materials

Monday, September 19, 2016

Interesting readings

An impressive policy paper: Self trading is not synonymous with market abuse by Nidhi Aggarwal, Chirag Anand, Shefali Malhotra, Bhargavi Zaveri, 29 June 2015, and an impressive response by SEBI today: Sebi reverses stand on self-trades by Jayshree P. Upadhyay in Mint.

Finance for the poor: policy and not programs by Ajay Shah in The Business Standard, 19 September.

Stopping the clock going back in Singur by Ashok K Lahiri in The Business Standard, 15 September.

SEBI's Flawed Attempt at Setting the Frequencies Right by Kanad Bagchi in The Wire, 15 September.

Accountability of justice by K. Parasaran in The Indian Express, 14 September.

Top Russian anti-corruption official had $120M in cash in his apartment by Cory Doctorow in Boing Boing, 11 September. This reminds us of the fallacy of searching for something like a Lok Pal who will solve all our problems. It also made me find out the Weight of a million dollars by Richard-ga in Google Answers, 5 February 2006, and brings a new perspective to illicit money movements in the context of Faulty tradeoffs in security, 10 May 2010.

A note for Dr Patel by Ila Patnaik in The Indian Express, 9 September.

Before amending the law by Abhinav Kumar and Sudhanshu Sarangi in The Indian Express, 9 September.

NCRB data: handle with care by K. P. Asha Mukundan in The Hindu, 9 September.

Air pollution cost India 8.5% of its GDP in 2013: study by Dipti Jain in The Mint, 9 September.

Tariffs Do More Harm Than Good at Home by Maurice Obstfeld on the iMFdirect blog, 8 September.

The need for a 'totaliser' revolution by Sanjay Kumar in The Hindu, 7 September.

China's Summer of Discontent by Elizabeth C. Economy in The CFR Blog, 6 September.

Not majority vs minority by Pratap Bhanu Mehta in The Indian Express, 6 September.

Vaccination isn't just for babies by Sujata Kelkar Shetty in The Mint, 5 September.

No harm in pre-consultation by Somasekhar Sundaresan in The Business Standard, 5 September.

The house is on fire! by Gary Saul Morson in The New Criterion, September 2016.

Myths and realities about America's infrastructure spending by Edward L. Glaeser in The City Journal, Summer 2016.

Aadhaar by Numbers by Sunil Abraham in NIPFP YouTube Channel.

Academic snobbery about top journals and top universities is under serious attack. Even without retractions top journals publish the least reliable science in The Bjoern Brembs Blog, January 2012. Why might this be happening? The top journals probably encourage research that is more conscious of fashion, does more p-hacking, with authors who do more social engineering. Also see.

How to test your decision-making instincts by Andrew Campbell and Jo Whitehead in The McKinsey Quarterly, May 2010.

Friday, September 16, 2016

Arriving at the correct value of the rupee

by Ajay Shah.

A recent front page story in the Indian Express came as a surprise examination for many economists in India. When currency policy is proposed, four ideas are useful:

  1. Nobody knows the correct exchange rate. Asking a government official for the correct price of the rupee is as pointless as asking him for the correct price of steel or the correct level of Nifty.
  2. We were once in a complicated world where RBI openly said that it had no framework. RBI governors heard pleas from importers and exporters, played favourites, and earned political capital. That period (1934-2015) is now behind us. Now, for the first time, RBI is accountable. It has an objective: inflation. The instrument (control of the policy rate) is used up in giving us the outcome (4% inflation).
  3. Chasing an exchange rate objective can lead to small problems (e.g. the exchange rate management of 2002-2007 kicked off an inflation crisis from 2006) or big problems (the rupee defence of 2013). Wisdom in public policy involves avoiding such adventurism.
  4. While an inflation targeting central bank should not pursue exchange rate policy, the exchange rate is an important input for an inflation targeting central bank. Changes in the exchange rate feed into domestic inflation through the price of tradeables. Thus, changes in the exchange rate are a useful input for forecasting inflation. The essence of good monetary policy is forecasting inflation [example]. RBI should consume the exchange rate, made by the market, as an input into its monetary policy process.

Thursday, September 15, 2016

Fiscal consequences of shifting an inflation target from 2% to 4%

by Ajay Shah.

Most advanced economies have a nominal anchor for monetary policy in the form of a 2% inflation target. This has presented difficulties when the policy rate hits 0%: the central bank must then either use a new and more unpredictable tool, quantitative easing, or find ways to force the short rate below zero. Both are difficult.

Some people are proposing that the inflation target should be raised to 4%. This possibility is being posed as a choice between two unpleasant things: on one hand, higher inflation impedes the smooth working of the economy; on the other, a 2% target forces more frequent encounters with the zero lower bound on interest rates. Ben Bernanke's recent blog article is an example of this debate.

In addition to these arguments, there is a fiscal perspective that needs to be brought to the table.

Suppose we suddenly raise the inflation target from 2% to 4%. Suppose there is no disruption, and everything works out smoothly. In the ideal scenario, the yield curve shifts up in parallel by 200 bps at all maturities.

This would be bad news for holders of nominal bonds issued by the government, nominal pensions, nominal bonds issued by private corporations, and so on.

A person who has a nominal pension backed by a corporation will be angry, but there will be nothing she can do about it. Persons who hold claims upon the government, however, would not take these losses lying down. They would organise themselves politically and ask for compensation for the losses they would face if such a decision were taken.

How large are the magnitudes? Suppose a country has explicit nominal government bonds and implicit nominal pension debt adding up to 100% of GDP. Suppose this debt has an average maturity of 10 years. The 200 bps parallel shift of the yield curve would impose a loss of roughly 20% on these claims, which works out to 20% of GDP. There is no democracy in which monetary policy wonks can impose a cost of 20% of GDP upon some people without a political fight. A negotiation would take place in which the adversely affected persons ask for compensation.
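As a cross-check on this arithmetic, here is a small illustrative computation, not from the article: repricing a 10-year par bond after a 200 bps yield shift. The exact loss comes out a bit below the maturity-times-shift approximation because of convexity, but the order of magnitude is the same.

```python
# Illustrative sketch, not from the article: reprice a 10-year par bond
# after the yield curve shifts up from 2% to 4%.

def bond_price(face, coupon_rate, yield_rate, years):
    """Price an annual-coupon bond by discounting its cash flows."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + yield_rate) ** years

p0 = bond_price(100, 0.02, 0.02, 10)   # par bond: prices at 100
p1 = bond_price(100, 0.02, 0.04, 10)   # same bond after the 200 bps shift

loss = (p0 - p1) / p0                  # exact mark-to-market loss, ~16%
approx = 10 * 0.02                     # maturity x shift: the rough 20% above
print(round(loss, 3), approx)
```

The round 20% in the text multiplies the 200 bps shift by the 10-year average maturity; exact discounting gives a somewhat smaller loss, but either way the hit is on the order of a fifth of the debt stock.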

This negotiation will be a difficult one. As an example, envision the US Treasury, the US Fed, and bondholders sitting in a room arguing about 20% of GDP. Things become more difficult in countries where the government owes nominal defined benefit pensions.

If the negotiation works out smoothly and cleanly, the debt/GDP ratio of the country goes up by 20 percentage points. This will make bondholders and credit rating agencies more nervous about the fiscal solvency of the country. While some countries (e.g. Australia) have good fiscal health, most advanced economies do not.

The last and most troublesome issue is that of credibility and confidence. Many advanced economies have a difficult fiscal situation, particularly when off-balance-sheet liabilities are counted. The bond market has generally been quite well disposed towards these countries; e.g. the bond market assumes the US will solve its fiscal crisis, even though nobody can see how this would be done. One key element of this confidence on the part of the bond market is: trust in the 2% inflation target. As fiat money is anchored with a 2% inflation target, the fiscal authority cannot inflate away debt by using inflation surprises. This reassures bond holders who are then willing to lend money to the sovereign at low interest rates.

Suppose the negotiations associated with the increase in the inflation target do not work out well, and some bondholders walk away feeling they were unfairly forced to accept a loss. There will be less trust the next time around. The bond market will not trust the 4% inflation target in the way it has come to trust the 2% target, and will demand a risk premium for bearing the risk that the institutional mechanism of monetary policy will not be trusted for decades and decades to come.

For some advanced economies, under certain kinds of mishandled negotiations, the project of trying to raise the inflation target from 2% to 4% could lead to a sharp one-time increase in the debt/GDP ratio and a higher required interest rate for government debt. These two outcomes could significantly worsen the fiscal situation for the government.

These considerations should be brought into the picture when evaluating the costs and benefits of raising the inflation target from 2% to 4%.

On related issues, this article from June 2009 has worked out reasonably well. One change that intervened was that the US moved closer to formal inflation targeting in 2012, thus removing some of the concern.

I acknowledge useful discussions with Josh Felman on these issues.

Saturday, September 10, 2016

The great Indian GDP measurement controversy

by Rajeswari Sengupta.

In 2015, the Central Statistical Office (CSO) revised the way GDP is calculated in India. According to the new series, India is the fastest growing large economy in the world, with a 7.1 percent real growth rate. Other trusted measures of the state of the economy convey a discordant picture. This discrepancy has led to an active debate with two parts. One part concerns the extent to which the official GDP data are accurate. The other part criticises the CSO's methods.

This article summarises the literature that examines CSO's methods. There are two main areas of concern: the way manufacturing Gross Value Added (GVA) has been estimated and the methodology for calculating deflators.

Manufacturing GVA

The manufacturing sector has been at the centre of the GDP debate. The methodological changes for this sector and consequently the data revisions have been substantial. Manufacturing growth for 2012-13 was revised up from 1.1 percent to 6.2 percent, while that for 2013-14 was increased from -0.7 percent to 5.3 percent. Various authors (Nagaraj, 2015a, 2015b, 2015c; Rajakumar, 2015; Nagaraj and Srinivasan, 2016; Sapre and Sinha, 2016) have questioned the reliability of the new estimates, on several grounds.

Enterprise vs. Establishment approach: In a major innovation, the new GVA methodology shifted data collection from establishments (or factories) to enterprises (or firms). Sapre and Sinha (2016) point out that lack of clarity on measures of output and costs at the enterprise level can lead to imprecise estimates of GVA. The activities of firms can be much more diverse than those of factories, and not all of these functions would qualify as manufacturing. Yet all the value added of enterprises classified as "manufacturing firms" has gone into the calculation of manufacturing GVA. This will inflate the level of output and possibly also the growth rate, if the ancillary activities are growing faster than the manufacturing ones.

Blowing up of GVA: Extrapolating from samples ("blowing up") is not a new feature of the current GDP series. What has changed is the database used. Previously, manufacturing GVA was based on the RBI's fixed sample of large private companies. Under the new series, the MCA21 database is used to compile a set of "active" companies, which have filed their annual financial returns at least once in the past three years. The problem is that in any given year, information from several active companies remains unavailable as of the cut-off date of data extraction. In such a case, the GVA of the available companies needs to be blown up to account for the unavailable ones. There are multiple issues with this blowing-up method.

  • The year-wise number of available and active companies in manufacturing is not publicly available. Hence, year on year, the exact number of companies for which the GVA is blown up is unknown.
  • While the Ministry of Corporate Affairs has made the filing of annual financial returns mandatory for all registered companies, it is not known how many of these companies produce any output on a regular basis.
  • The blowing-up factor is the inverse of the ratio of the paid-up capital (PUC) of the available companies to that of the active set as a whole. Nagaraj (2015a, 2015b) argues that this is inappropriate, since a large fraction of the MCA21 active set are "fictitious, shell companies" that exist only on paper. In that case, the blowing-up method is likely to overestimate GVA.
  • Sapre and Sinha (2016) argue that blowing up using the PUC is inappropriate because PUC and GVA have no one-to-one relation. Also, the actual GVA of some "active but unavailable" companies may be negative in a particular year. In such cases, blowing up GVA using the PUC factor can lead to overestimation.
  • The actual computation of the blowing-up factor applied by the CSO in the new series has not been described in detail in the official documents. This makes it difficult to replicate and analyse the process.
  • A single blowing-up factor has been used for both private and public limited companies. Rajakumar (2015) points out that this is not appropriate, as the two groups diverge widely in their patterns.
  • The number of "available" companies reporting their annual financial returns to the MCA varies across years. As a result, the blowing-up factor that accounts for the non-reporting companies also varies from year to year. As highlighted by Nagaraj (2015a, 2015b) and Sapre and Sinha (2016), this variation results in wide fluctuations in the final GVA estimates.
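The mechanics of this blowing-up procedure can be made concrete with a small sketch. All numbers here are invented, since the CSO's actual computation is not publicly documented (which is precisely one of the complaints):

```python
# Sketch of the PUC-based blowing up described above. All numbers are
# invented; the CSO's actual computation is not publicly documented.

# Companies that actually filed returns ("available"): GVA and paid-up capital.
available = [
    {"gva": 120.0, "puc": 50.0},
    {"gva": 80.0,  "puc": 30.0},
    {"gva": -10.0, "puc": 20.0},   # GVA can be negative in a given year
]
# Total paid-up capital of the full "active" set, including non-filers.
puc_active_total = 160.0

gva_available = sum(c["gva"] for c in available)   # 190
puc_available = sum(c["puc"] for c in available)   # 100

# Blowing-up factor: inverse of the available-to-active PUC ratio.
factor = puc_active_total / puc_available          # 1.6
gva_estimate = gva_available * factor              # 304

print(factor, gva_estimate)
```

If shell companies pad the active set's PUC, or if the unavailable companies in fact had negative GVA, the factor mechanically scales up a number that should not be scaled up, which is the overestimation the critics describe.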

Identification of manufacturing companies: Sapre and Sinha (2016) find that within the manufacturing sector several companies operate as wholesale traders or service providers. These companies may have changed their line of business since they were originally registered. These changes do not get reflected in the Company Identification (CIN) code assigned to the companies. Such misclassification of companies will distort the manufacturing estimates, although not the overall GVA.

MCA21 vs. IIP: There are other problems with the manufacturing GVA calculation that have not been written about much. For the manufacturing sector, GVA is derived from a combination of MCA21 numbers, Index of Industrial Production (IIP) estimates, and estimates of the unorganised sector from the Annual Survey of Industries (ASI). While the MCA21 is a new database, the base year for the IIP data is still 2004-05. Moreover, the data obtained from MCA21 follow the "enterprise" approach mentioned earlier, while the data obtained from the ASI follow the old "establishment" approach. The full implications of these discrepancies are yet to be understood.


Deflators

Previously, estimates of real GDP relied heavily on production indices such as the IIP. Now, most real numbers are derived by taking nominal data and deflating them by price indices. If done well, this approach can give a more accurate measure of value added. But if the deflators used are inappropriate, the estimated real magnitudes will be distorted. And this may well have happened in the past few years, since there have been very large changes in relative prices (especially of petroleum and other commodities), which are inherently difficult to capture in aggregate deflators. The issues here are as follows.

Double deflation: In most G20 countries, real manufacturing GVA is computed using a methodology known as double deflation. In this method, nominal outputs are deflated using an output deflator, while inputs are deflated using a separate input deflator. Then, the real inputs are subtracted from real outputs to derive real GVA. But in India things are done differently. Here, we compute the nominal GVA, and then deflate this number using a single deflator.

If input prices move in tandem with output prices, both methodologies give similar results. But if the two price series diverge, as they have in India for the past few years, single deflation can overstate growth by a big margin.

The reason is not difficult to see. If the price of inputs falls sharply, profits increase, and nominal value added goes up. Since real GDP is supposed to be measured at "constant prices", this increase needs to be deflated away. Double deflation does this easily. But single deflation does not. Worse, if a commodity-weighted deflator like the Wholesale Price Index (WPI) is used, as under the current methodology, the falling deflator inflates measured real growth even further, precisely because commodity prices are falling! In this case, real growth will be seriously overestimated. A fuller explanation is provided here.

As the gap between input and output inflation starts to close, the problem will diminish. But that could also send a misleading signal, because it might seem that growth is slowing, when only the measurement bias is disappearing.
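The difference between the two methods can be seen in a small numerical sketch, with all numbers invented for illustration:

```python
# Stylised example with invented numbers: real quantities are unchanged
# between year 0 and year 1, but input prices fall sharply.

# Year 0: output 100 units at price 1.0, inputs 60 units at price 1.0.
base_gva = 100 * 1.0 - 60 * 1.0                      # nominal = real = 40

# Year 1: same quantities; output price still 1.0, input price now 0.8.
nominal_output = 100 * 1.0
nominal_inputs = 60 * 0.8
nominal_gva = nominal_output - nominal_inputs        # 52: profits rose

# Double deflation: deflate outputs and inputs by their own price indices.
real_gva_double = nominal_output / 1.0 - nominal_inputs / 0.8   # 40

# Single deflation with a commodity-heavy deflator that fell to 0.9.
real_gva_single = nominal_gva / 0.9                  # ~57.8

growth_double = real_gva_double / base_gva - 1       # 0%: nothing real changed
growth_single = real_gva_single / base_gva - 1       # ~44% spurious growth
print(round(growth_double, 3), round(growth_single, 3))
```

Nothing real changed between the two years, yet single deflation with a falling commodity-weighted deflator manufactures large measured growth out of a relative-price movement.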

Service sector deflator: Deflator problems also plague the estimates for the service sector, which accounts for the bulk of GDP. Currently, the deflator used for much of this sector is the WPI. But the weight of services in the WPI is negligible. If instead the services component of the Consumer Price Index (CPI) were used, growth in this sector would be far lower than currently estimated.

WPI vs. CPI: Finally, there are questions about whether the WPI should be used as a deflator at all. The weights are now more than a decade old, and India's economic structure has changed radically over this period. In addition, the sample frame (the selection of goods sampled) is also out of date. The CPI is a better price index [link, link].

Potential refinements

Based on the foregoing, a number of refinements to the GDP methodology could be considered:

  • Releasing disaggregated information on firm output and cost items, to permit more precise estimation of manufacturing GVA given the shift from the establishment to the enterprise approach.
  • Altering the definition of the active set of manufacturing companies, to ensure the companies are truly active.
  • Releasing the number of active and available companies every year by industry or sector, to get a sense of the companies contributing to GVA.
  • Shifting the blowing-up factor away from paid-up capital towards another indicator, for instance imputing to "active but unavailable" companies the overall growth rate of the relevant subsector.
  • Using separate blowing-up factors for public and private limited companies. Currently the blowing-up factor does not take into account the size, industry or ownership of the unavailable companies.
  • Reviewing the classification of companies to ensure they are categorized appropriately.
  • Providing greater clarity and transparency about the database and methodology used to estimate the manufacturing sector GVA. Also, documents could be released explaining the precise method used to blow up the GVA estimates.
  • Adopting the double deflation method to calculate real manufacturing GVA.
  • Using the relevant CPI components to deflate service sector GVA.
  • More generally, the WPI could be replaced by the relevant CPI components for the long period until a Producer Price Index (PPI), which would be the ideal deflator, is developed.

Until this methodological debate subsides, official GDP data should be used with caution, as they may not accurately reflect conditions in the economy. Other proxies for output are required.


I thank Josh Felman, Deep Mukherjee, R. Nagaraj, Amey Sapre, and Pramod Sinha for useful conversations.


References

Nagaraj, R. (2015a), Seeds of doubt on new GDP numbers: Private corporate sector overestimated?, Economic and Political Weekly, Vol. L, No. 13.

Nagaraj, R. (2015b), Seeds of doubt remain: A reply to CSO's rejoinder, Economic and Political Weekly, Vol. L, No. 18.

Nagaraj, R. (2015c), Growth in GVA of Indian manufacturing, Economic and Political Weekly, Vol. L, No. 24.

Nagaraj, R. and T.N. Srinivasan (2016), Measuring India's GDP Growth: Unpacking the Analytics & Data Issues behind a Controversy that Refuses to Go Away, India Policy Forum, July 2016.

Rajakumar, J. Dennis (2015), Private corporate sector in new NAS series: Need for a fresh look, Economic and Political Weekly, Vol. L, No. 29.

Sapre, Amey, and Pramod Sinha (2016), Some areas of concern about Indian manufacturing sector GDP estimation, NIPFP Working Paper 172, August 2016.

The author is a researcher at the Indira Gandhi Institute of Development Research.


Friday, September 09, 2016

UIDAI's 4th big public policy innovation: Build forts, not empires

by Praveen Chakravarty.

The insightful paper, and accompanying blog article, by Ram Sewak Sharma about three big innovations in UIDAI got me thinking about my own UIDAI experience. What were the key innovations which made it work out well, especially as viewed from the lens of the private sector? What lessons can we take away? In addition to his three big ideas, I have one more.

`Asset light business' is the new buzzword among investors, especially venture capital and angel investors. The world's largest taxi company, Uber, owns no taxis; the world's largest room provider, AirBnb, owns no hotel rooms; the world's largest movie house, Netflix, owns no movies. This is the popular refrain among the fraternity of modern-day startups and their cheer-leading investors. In a similar vein, it can be argued that the world's largest identity provider owns no identity devices! To top it off, this was not a traditional start-up founded in a college dorm by a 20 year old. This is the Unique Identification Authority of India (UIDAI), a staid authority of the staid government of India, manned by staid bureaucrats.

On 26 March 2004, Bharti Airtel announced a large, first-of-its-kind outsourcing contract with IBM. It essentially meant that Bharti Airtel, the telecom player, would own no telecom equipment, network hardware or software. It would simply acquire and own customers. IBM, in turn, took on the responsibility of managing all the complex hardware and software required to run a massive telecom operation. Moreover, IBM was to be paid a percentage of the revenues that Bharti Airtel earned, not a fat, flat fee as was the prevailing norm then. Bharti Airtel grew from a base of 6.5 million subscribers to 250 million subscribers in 12 years, leaving most of its competitors behind. This brave decision to smart-source its capital expenditure to IBM undeniably played a key role in Bharti Airtel's ability to scale so rapidly, a fact now largely forgotten in the annals of its success.

It was then dubbed the `capex to opex' transformation by financial analysts such as myself, i.e. converting big sunk costs of capital expenditure into revenue-generating operating expenditure, yielding significant gains in efficiency and scale. Because the cost of capital in India is high owing to capital controls, there is a natural gain when an Indian firm, which suffers from this elevated cost of capital, contracts out the ownership of bulky capital assets to an MNC, which enjoys global levels of cost of capital.

When UIDAI embarked on providing a unique identity to a billion Indians across more than 6.5 lakh villages, the sheer scale was daunting. As the Ram Sewak Sharma paper rightly mentions, there was detailed thought behind the use of the iris, field trials to test the proof of concept, etc. But perhaps the single biggest catalyst in converting this grandiose plan into reality was UIDAI's decision to smart-source identity data collection. Surely, the technology industry background of the founding Chairman of the UIDAI played a role in the decision to do a Bharti Airtel in public policy. Nevertheless, in hindsight, this decision to embrace the `capex to opex' theme, adapted to the Indian public policy environment, laid the `Aadhaar' for Aadhaar to scale so rapidly.

In an otherwise typical government project, offices would have been set up in every district, personnel would have been hired, biometric scanners would have been purchased, and then identity information would have been collected, all by a government body or a clutch of bodies. This would have meant incurring massive upfront capital costs of infrastructure, technology and people. The UIDAI instead turned this on its head and decided to build an entire ecosystem of private vendors to do the data collection, with the costs of machines, people and infrastructure borne by the vendors. Essentially, this meant that ubiquitous but authorised and approved UIDAI data collection centres and camps mushroomed all across the country in a short span of time, which made it easy for residents to register. But in the public policy world, unlike the corporate world, protecting against downside risks is far more important than any potential upside gains, i.e. protecting the data security of Indian residents is infinitely more important than any efficiency gains from outsourcing to private vendors. This was achieved by establishing strict oversight and control mechanisms that rested entirely with the UIDAI.

The UIDAI exercised stringent control over data encryption and validation. So, while the biometric data of a billion Indians were collected by thousands of independent government and private agencies, all of them collected the data through standardised software provided by the UIDAI, which encrypted the data before it was sent back to the UIDAI centre for validation. The UIDAI incurred a cost of roughly Rs.65 for every individual's biometric data successfully enrolled. Thus, the UIDAI did not have to draw up big financial plans and wait for funding from the Ministry of Finance before it could launch its activities across the country.

This was one of the biggest reasons UIDAI could go from zero to 600 million unique identities in four years flat, perhaps the fastest of any government or even private sector initiative in recent times anywhere in the world. This was the power of the `capex to opex' or `asset light' innovation in policy implementation. This philosophy of the UIDAI, eschewing the temptation to `build empires' and choosing to `build forts' instead, can serve many large-scale government project implementations well.

We in India are gradually developing knowledge about how to build State capacity. Computer technology allows us to leapfrog, and achieve remarkable kinds of State capacity which were otherwise unavailable at our level of per capita GDP. We are crossing this river by feeling the stones. We are building experience, and we are building a literature. A paper by Ajay Shah in 2006, the Tagup report, Nandan Nilekani's book from 2013, the book by Nandan Nilekani and Viral Shah from 2016, the concept of the Financial Data Management Centre (FDMC) in the Indian Financial Code, the concepts of information utilities in the bankruptcy reform, the Ram Sewak Sharma paper, and my one idea in this article: these add up to the emergence of deeply grounded local knowledge on how to do this. These materials are great fodder for thinking, debate, and then actually doing things in India.

The author is Senior Fellow in Political Economy at IDFC Institute, a Mumbai think tank, and former consultant with the UIDAI.

Thursday, September 08, 2016

UIDAI's public policy innovations

by Ram Sewak Sharma.

Unique Identification Authority of India (UIDAI) had the goal of issuing unique identification numbers to every resident of India. In a country as large as ours, this was a difficult task to achieve. UIDAI has largely accomplished this within a short period of about six years. I believe it was able to do this only because it took many innovative and bold decisions. In a recent paper I examine some of these innovations. The paper also tries to derive lessons from UIDAI that could be applied in other government projects.

The Use of Iris Scans

The UIDAI felt that unless iris images were used in addition to fingerprints, it would not be able to fulfil its mandate of unique identification. However, there were many concerns related to the use of iris images. Was this technology mature enough? Was it too expensive? Were there enough vendors in the market to prevent lock-in?

The UIDAI set up a committee to deliberate on the issue of which biometrics to collect and what standards to use for unique identification. This committee recognised the value of using iris images in improving accuracy. However, it fell short of recommending the inclusion of the iris in the biometric set and left the decision to UIDAI.

After a detailed examination, the UIDAI came to the conclusion that the inclusion of iris to the biometric set was necessary for a number of reasons, such as ensuring uniqueness of identities, and achieving greater inclusion. In retrospect, this turned out to be one of the most important decisions of the UIDAI.

On-field trials

The practice of conducting on-field trials was an important innovation. When UIDAI began its mission, there were many questions inside and outside the organisation on whether the very idea of unique identification for every resident was feasible at all. The idea of using biometrics to ensure the unique identification and authentication of all residents in India was an untested one. There were many assumptions behind it, and the data required to test the validity of these assumptions was not available. For instance, most of the research done on using biometrics for identification or authentication was done in western countries, and that too, on relatively small numbers of people.

The knowledge which had been produced by Western researchers was not applicable in the Indian context. Could the fingerprints of rural residents and manual labourers be captured successfully, or would they be excluded from Aadhaar? What about the iris images of old or blind people? Do the devices available in the market serve the purpose? What would be the most efficient and effective way to organise the process of enrolment? These questions needed to be answered if the project was to be successful.

The strategy adopted at UIDAI was to conduct a set of trials (called Proofs of Concept, PoCs) in several states across the country. The areas were selected to be representative of real-life enrolment and authentication. A number of biometric capture devices of different makes were used, and several different enrolment processes were tried out. The PoCs were carefully designed to answer sharply articulated questions, either to verify UIDAI's assumptions, or to capture the data required to fill in gaps in the UIDAI's knowledge. In essence, the scientific method was applied to create the knowledge that was pertinent to the decisions that had to be made at UIDAI. Resources had to be allocated to this work, and in return for that, major sources of project risk were eliminated.

The results of the PoCs indicated that the major hypothesis of the UIDAI was correct: that it was indeed possible to capture biometric data that was fit for the purpose of deduplication and verification. The results also showed that iris capture did not present any major challenges. An efficient enrolment process was devised using the data captured during these trials.


Promoting competition
The last innovation considered in the paper relates to competition. Given the scale and importance of the project, the UIDAI felt it was important to increase efficiency and reduce costs by leveraging the competencies available in the private sector. At the same time, it was also essential to avoid a situation where any one private player could exercise significant power over the effective functioning of the Aadhaar system: the Authority wanted to ensure that there was a competitive market for providing services to it. To promote such a competitive market, the Authority adopted a two-pronged strategy: open standards (creating standards where there were none), and open APIs (Application Programming Interfaces).

The Authority used this strategy in procuring vendors for deduplication. Algorithms for deduplication had never been tested at the scale required in this project. To reduce the risk of poor quality deduplication, the UIDAI came up with a novel solution. It decided to engage three biometric service providers (BSPs), instead of just one. These BSPs would interface with the UIDAI systems using open APIs specified by the Authority. This decision helped avoid vendor lock-in, and increased scalability.

The UIDAI selected the three top bidders on the basis of the total cost per deduplication. Even after these three vendors were selected, the Authority was able to set up a competitive market among them, using an innovative system to distribute deduplication requests among them. Vendors were paid on the basis of the number of deduplication operations they were able to carry out, and the Authority allocated operations to them on the basis of how fast and how accurate they were. This led to a situation where the BSPs were constantly competing with each other to improve their speed and accuracy.
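The allocation mechanism can be sketched as follows. This is an illustrative toy, not the UIDAI's actual formula (which the paper does not spell out): requests are shared out in proportion to a score that rewards both throughput and accuracy, so a vendor can grow its share only by improving on both dimensions.

```python
# Illustrative sketch (NOT the actual UIDAI formula): allocate deduplication
# requests among competing biometric service providers (BSPs) in proportion
# to a score combining speed and accuracy.

def allocate_requests(bsps, total_requests):
    """bsps: name -> (throughput, accuracy). Returns name -> share of requests."""
    scores = {name: thr * acc for name, (thr, acc) in bsps.items()}
    total_score = sum(scores.values())
    return {name: round(total_requests * s / total_score)
            for name, s in scores.items()}

# A faster and more accurate vendor earns a larger share of the work.
shares = allocate_requests(
    {"BSP-A": (1000, 0.99), "BSP-B": (800, 0.95), "BSP-C": (600, 0.99)},
    total_requests=100_000,
)
```

Since payment is per operation performed, a larger share of requests means more revenue, which is what keeps the three vendors competing on speed and accuracy.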

Where standards were not present, the UIDAI was willing to create new standards in order to increase competition. At the outset of UIDAI's work, every biometric device had its own interface, distinct from the interfaces of other biometric devices. If a capture application wanted to support 10 commonly used devices, then the application developer would have to implement 10 different interfaces. This would have made it costly to bring new devices into the project, even if these new devices were cheaper and better. In order to avoid this situation, the UIDAI created an intermediate specification. Vendors could implement support for this specification, and their devices could be certified. This allowed all capture applications to work with all certified devices.
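The economics of this intermediate specification is the classic adapter pattern: M capture applications and N certified devices need only M + N integrations against a common interface, instead of M x N bilateral ones. A minimal sketch, with hypothetical class and method names (the actual specification is not reproduced in the paper):

```python
# Hypothetical sketch of the common-interface idea: capture applications
# program against one abstract interface; each vendor ships an adapter
# implementing it, so any certified device works with any application.
from abc import ABC, abstractmethod

class BiometricDevice(ABC):
    """The single interface every certified device must implement (illustrative)."""
    @abstractmethod
    def capture(self) -> bytes: ...

class VendorXFingerprintScanner(BiometricDevice):
    """A vendor adapter wrapping that vendor's proprietary SDK (stubbed here)."""
    def capture(self) -> bytes:
        return b"fingerprint-image-bytes"  # a real adapter would call the SDK

def enrol(device: BiometricDevice) -> int:
    # The application only knows the common interface, never the vendor SDK.
    return len(device.capture())
```

Swapping in a cheaper or better device then requires only a new adapter, not changes to every capture application.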


The success of the UIDAI offers many lessons for other government projects. Perhaps the first is that innovation is indeed possible within government. Government processes need not prevent innovative decisions. In fact, processes commonly used within government, such as expert committees and consensus-based decision-making, provide credible ways to examine difficult issues. High-quality procurement and project-management skills can help the government outsource many functions that are currently housed within it.

The paper also suggests that scale and complexity need not be deterrents to private sector participation: in fact, the large scale of government projects can make the project more attractive to private parties. Another lesson government agencies could learn from the UIDAI is the need to test major hypotheses through field trials before launching projects at scale. Conducting such field trials provides an opportunity to change the design or the implementation roadmap well in time, thus saving precious public money from being wasted.


The UIDAI could achieve its objective because it adopted a different approach from most government organisations. It took tough decisions, such as the one to use iris images; it expended resources on building pertinent knowledge, by constantly experimenting on the ground and learning from these trials; and it exploited private-sector competition to achieve its task at the lowest cost. It should be noted that this is not an exhaustive list of its innovations, but without these three decisions, it is unlikely the UIDAI would have been able to fulfil its mission.

Even large government projects can be done fast and efficiently. Government processes need not be obstructive. In fact, the mechanisms of bureaucracy, such as committees, adherence to financial regulations, and desire for consensus, can help to resolve difficult issues and take tough decisions. Well-designed pilots and field-tests can help the government evaluate the effectiveness of large programs, so that it can deploy public resources more usefully. High quality procurement and contract-management processes can enable the government to leverage the dynamism of the private sector to provide public goods effectively.


I am grateful to Prasanth Regy and Ajay Shah, both of NIPFP, for stimulating discussions.

The author is Chairman, Telecom Regulatory Authority of India (TRAI) and was part of the founding team at UIDAI.

Wednesday, September 07, 2016

Dating the Indian business cycle

by Radhika Pandey, Ila Patnaik, Ajay Shah.

Most macroeconomics is about business cycle fluctuations. The ultimate dream of macroeconomic policy is to use monetary policy and fiscal policy to reduce the amplitude of business cycle fluctuations, without contaminating the process of trend GDP growth. From an Indian policy perspective, this agenda is sketched in Shah and Patnaik (2010). The starting point of all these glamorous things, however, is measurement. The major barrier to doing Indian macroeconomics is the lack of the foundations of business cycle measurement.

The first milestone in this journey is sound procedures for seasonal adjustment of a large number of macroeconomic time series. At NIPFP, we have built this knowledge over the last decade, and insights from this work are presented in Bhattacharya et al. (2016).

The next milestone is dates of turning points of the business cycle. As an example, in the US, the NBER produces a set of dates. These dates are extremely valuable in myriad applications. As an example, the standard operating procedure when drawing the chart of a macroeconomic time-series is to show a shaded background for the period which was a contraction. Here is one example: y-o-y CPI inflation in the US, with recessions shown as shaded bars. In the Indian setting, several papers have worked on the problem of identifying dates of turning points of the business cycle (Dua and Banerji, 2000, Chitre, 2001, Patnaik and Sharma, 2002, Mohanty, 2003).

In a new paper (Pandey et al., 2016) we bring three new perspectives to this question:

  1. In the older period, India was an agricultural economy, and the ups and downs of GDP growth were largely monsoon shocks. It is only in the recent period that we have got structural transformation, and the market process of cyclical behaviour of corporate investment and inventory, which add up to a business cycle phenomenon that is recognisably related to the mainstream conception of business cycles (Shah, 2008). This motivates a focus on the post-1991 period.
  2. We are able to shift from annual data to quarterly data by starting in the mid-1990s.
  3. We have laid the groundwork for this to be a system, with regular updating of the dates, rather than a one-off paper. 


One approach to business cycle measurement focuses on ``growth cycles'', and relies on detrending procedures to extract the cyclical component of output. The cycle is defined to be in the boom phase when actual output is above the estimated trend, and in recession when the actual output is below the estimated trend. This identifies expansion and contraction based on the level of output. In contrast, the ``growth rate cycle'' identifies turning points based on the growth rate of output. For the post-reform period in India, this is more appropriate.

At an intuitive level, the procedure works as follows. First, we remove the trend and focus on fluctuations away from the trend. Second, we remove the high frequency fluctuations (below two years) and the low frequency fluctuations (above eight years). What's left is in the range of frequencies which are considered `the business cycle'. Third, we identify turning points in this series.

In terms of tools and techniques, we use the Christiano-Fitzgerald filter, a band-pass filter, to extract the NBER-suggested frequencies of two to eight years. To the filtered cyclical component, we apply the dating algorithm developed by Bry and Boschan (1971).
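The procedure above can be sketched in code. This is a minimal illustration on synthetic quarterly data, not the authors' implementation: it uses the Christiano-Fitzgerald band-pass filter from statsmodels, and a deliberately simplified local-extremum rule stands in for the full Bry-Boschan procedure.

```python
# Sketch: band-pass filter a quarterly series to the 2-8 year (8-32 quarter)
# business cycle frequencies, then mark turning points as local maxima and
# minima of the filtered series (a simplified Bry-Boschan rule).
import numpy as np
from statsmodels.tsa.filters.cf_filter import cffilter

def turning_points(cycle, window=2):
    """Indices of local peaks/troughs: extrema over +/- `window` quarters."""
    peaks, troughs = [], []
    for t in range(window, len(cycle) - window):
        seg = cycle[t - window : t + window + 1]
        if cycle[t] == seg.max():
            peaks.append(t)
        elif cycle[t] == seg.min():
            troughs.append(t)
    return peaks, troughs

# Synthetic quarterly "GDP growth": trend + a 5-year (20-quarter) cycle + noise.
rng = np.random.default_rng(0)
t = np.arange(80)
series = 0.02 * t + np.sin(2 * np.pi * t / 20) + 0.1 * rng.standard_normal(80)

# low/high are periods in quarters: 8 quarters = 2 years, 32 quarters = 8 years.
cycle, trend = cffilter(series, low=8, high=32, drift=True)
peaks, troughs = turning_points(np.asarray(cycle))
```

On this synthetic series, the detected peaks and troughs alternate roughly every ten quarters, matching the 20-quarter cycle that was built in.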

Our analysis is focused on the seasonally adjusted quarterly GDP series (base year 2004-05). This series is available from 1996 Q2 (Apr-Jun) to 2014 Q3 (Jul-Sep). The CSO has revised the GDP series to a new base year of 2011-12, but the revised series is available only from 2011 Q2. Hence we stick to the series with the old base year for our analysis.


De-trended, filtered, seasonally adjusted real GDP growth

As an example, look at the period of the Lehman crisis. It is well known that the economy was weakening well before the Lehman bankruptcy of September 2008: for instance, the INR started depreciating sharply from January 2008 onwards. The evidence above shows that the economy peaked in Q2 2007, and started weakening thereafter.

Each turning point is a fascinating moment. In Q2 2007, i.e. Apr-May-Jun 2007, growth was good but the business cycle was about to turn. It is interesting to go back to each of these turning points and think about what was going on then, and what we were thinking then.

Dates of turning points in GDP, 1996-2014

Phase      Start    End      Duration (quarters)  Amplitude (per cent)
Recession  1999 Q4  2003 Q1        13                   3.3
Expansion  2003 Q1  2007 Q2        17                   2.5
Recession  2007 Q2  2009 Q3         9                   2.3
Expansion  2009 Q3  2011 Q2         7                   1.3
Recession  2011 Q2  2012 Q4         6                   0.9

Our findings on the business cycle chronology are robust to the choice of filter and of business cycle indicator. We repeat the analysis using alternative indicators, such as IIP, GDP excluding agriculture and government, and firms' net sales, and find broadly similar turning points. Details of these explorations are in the paper.

A system, not just a paper

This is not a one-off paper. We will review these dates regularly and update the files, while avoiding changes in URLs. If the methods run into trouble with future data, we will revise the methods. This work would thus become part of the public goods of the Indian statistical system.

All key materials have been released into the public domain. In addition to a paper web page, we have a system web page which gives a .csv file with dates at a fixed URL and can be used e.g. in your R programs.
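A consumer of such a dates file might look as follows. The column names and phase labels here are hypothetical stand-ins (check the actual .csv on the system web page); the dates themselves are those reported in the paper, and the sketch recovers each phase's duration in quarters.

```python
# Sketch of consuming a business-cycle dates file. A real client would fetch
# the fixed URL; here an inline string stands in for the published .csv, and
# the column names (phase, start, end) are hypothetical.
import csv, io

DATES_CSV = """phase,start,end
Recession,1999Q4,2003Q1
Expansion,2003Q1,2007Q2
Recession,2007Q2,2009Q3
Expansion,2009Q3,2011Q2
Recession,2011Q2,2012Q4
"""

def quarters(label):
    """'1999Q4' -> integer count of quarters, so dates can be subtracted."""
    year, q = label.split("Q")
    return int(year) * 4 + int(q)

phases = [
    (row["phase"], quarters(row["end"]) - quarters(row["start"]))
    for row in csv.DictReader(io.StringIO(DATES_CSV))
]
```

The computed durations (13, 17, 9, 7 and 6 quarters) match the turning-point table above; the same phase spans are what one would shade as recession bars on a chart.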

An example of an application

An example of placing recession bars on a graph: growth in net sales of (non-finance, non-oil) firms

The graph above shows the familiar series of seasonally adjusted annualised growth of the net sales of non-financial, non-oil firms, with shaded bars showing downturns. This series starts only after 2000, as quarterly disclosure by firms began only then. Placing this series into the context of business cycle events gives us fresh insight into both: we learn something about how the sales of firms respond to business cycle fluctuations, and we learn something about business cycle fluctuations.

Facts about the Indian business cycle

It is useful to know summary statistics about the Indian business cycle: the average duration and amplitude of expansions and recessions, and the coefficient of variation (CV) of duration and amplitude across them.

Summary statistics of GDP growth cycles

Phase      Average amplitude (per cent)  Average duration (quarters)  CV of duration (CVD)  CV of amplitude (CVA)
Expansion             2.5                         12.0                       0.34                  0.38
Recession             2.2                          9.3                       0.31                  0.45

The average amplitude of expansions is 2.5% while that of recessions is 2.2%. The average duration of expansions is 12 quarters while that of recessions is 9.3 quarters. These are fascinating new facts about India. There is more heterogeneity in the amplitude of downturns than of expansions.
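As a reminder of the mechanics, the CV is simply the standard deviation divided by the mean. Applied to the three recession durations in the turning-point table (13, 9 and 6 quarters), it reproduces the reported CVD for recessions of 0.31:

```python
# Coefficient of variation (CV): population standard deviation over the mean.
from statistics import mean, pstdev

def cv(values):
    return pstdev(values) / mean(values)

recession_durations = [13, 9, 6]                  # quarters, from the table
cvd_recession = round(cv(recession_durations), 2) # 0.31
```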

Changing nature of the Indian business cycle

In recent decades, a number of emerging economies have undergone structural transformation and introduced reforms aimed at greater market orientation. An emerging strand of literature studies how business cycle stylised facts have changed in response to these changes (Ghate and Alp, 2012). In the paper, we explore some of these changes.

In the post-reform period, both expansions and recessions have become more diverse in duration and amplitude. Some episodes of recession are deeper and more severe than others. Similarly, there is considerable variation in the duration of expansions and recessions across specific cycles: some are short-lived while others are more persistent.


Rudrani Bhattacharya, Radhika Pandey, Ila Patnaik and Ajay Shah. Seasonal adjustment of Indian macroeconomic time-series, NIPFP Working Paper 160, January 2016.

Radhika Pandey, Ila Patnaik and Ajay Shah. Dating business cycles in India. NIPFP Working Paper 175, September 2016.

Ajay Shah (2008). New issues in macroeconomic policy. In: Business Standard India. Ed. by T. N. Ninan. Business Standard Books. Chap. 2, pp.26--54.

Ajay Shah and Ila Patnaik (2010). Stabilising the Indian business cycle. In: India on the growth turnpike: Essays in honour of Vijay L. Kelkar. Ed. by Sameer Kochhar. Academic Foundation. Chap. 6, pp.137--154.

Monday, September 05, 2016

Interesting readings

Financial reforms: A mid-term report card by Ajay Shah in The Business Standard, 5 September.

With a new ban on antibacterial soap, the US government is finally acknowledging that it's not just ineffective, it's also dangerous by Elijah Wolfson in The Quartz, 02 September.

Sydney and Melbourne have canceled concerts celebrating Chairman Mao by Isabella Steger in The Quartz, 01 September. I have always been struck by how Hitler is treated differently from Stalin and Mao. You can actually buy Stalin and Mao memorabilia in Moscow and Beijing.

You want to pay the publisher but you don't want the publisher monitoring what you read. Publishers must let online readers pay for news anonymously by Richard Stallman in The Guardian, 1 September and an open source software system to implement this: Electronic payments for a liberal society!

Real costs of high-frequency trading by Venkatesh Panchapagesan in The Mint, 31 August.

Day of the specialist by Manish Sabharwal in The Indian Express, 29 August.

Reverse Voxsplaining: Drugs vs. Chairs by Scott Alexander in The Slate Star Codex, 29 August.

Dalits are right: Enough is enough by P Chidambaram in The Indian Express, 28 August.

Perumal Murugan returns by Salil Tripathi in The Mint, 25 August.

Sedition law cannot be used against honest views, expressed peacefully by Soli J. Sorabjee in The Indian Express, 25 August.

Here is proof that banks mis-sell and Yes, banks mis-sell. Now what? by Monika Halan in Mint, unveiling the recent Halan & Sane paper.

Why Maharashtra CM Devendra Fadnavis must make new mandis work by Financial express in The Financial Express, 23 August.

The `Know-Do' gap in primary health care in India by Jeffrey S. Hammer in NIPFP YouTube Channel.

Smugglers Secretly Repairing Russian Roads to Boost Business in The Moscow Times, 22 August.

Contrary reading of law by regulator by Somasekhar Sundaresan in The Business Standard, 22 August.

Nigerian startups can't raise money through crowdfunding because of antiquated laws by Yomi Kazeem in The Quartz, 19 August. India has a similar innovation-unfriendly financial regulatory environment.

Why inflation targeting works by Rajeswari Sengupta in The Mint, 16 August.

Use the Web instead by Ruben Verborgh on Ruben Verborgh's blog, 05 August.

How a new source of water is helping reduce conflict in the Middle East by Rowan Jacobsen in The Ensia, 19 July.

Saturday, September 03, 2016

Design of the Indian GST: Walk before you can run

by Satya Poddar and Ajay Shah.

A previous article, Sequencing in the construction of State capacity: Walk before you can run argues that in public administration, we should first reach for a modest objective, i.e. a low load, and build sound public administration systems, i.e. adequate load bearing capacity. Only after the systems have been proven to work at a low level of load should we consider increasing the load.

In building tax administration, the load is defined by (a) the tax rate, and (b) the complexity of the tax in its very design: e.g. a sales tax is easier than an income tax. If the tax rate is low, the employee of the tax collection agency has a greater incentive to collect the tax; when the tax rate is high, there is a greater temptation to just take a bribe instead. If the tax system is simple, there is reduced discretion at the front line, and thus reduced rent-seeking.

In places like the UK, where there is high State capacity, income tax began at low rates. When Pitt the Younger introduced the income tax in 1798, the peak rate was 10%. This gave them the opportunity to build sound tax administration under conditions of low load. Once this was done, the road to higher tax rates was open. In similar fashion, Singapore started with a GST rate of 3%, and has since gone up to 7%. The Japanese GST rate was also 3% at inception, and has now been moved up to 8%. In India, we never made the tax administration work at low rates of tax; premature load bearing was attempted by jumping to high tax rates without adequate load-bearing capacity in the form of a well-designed tax administration.

A standard debate in tax policy is about the choice between a low rate and a wide base versus higher rates applied on a smaller base. The traditional economics argument has been that the distortion associated with a tax goes up as the tax rate squared, so for a given level of tax revenue we are better off with a low rate and a wide base. A simple tax system with low rates will help lower the extremely large value for the Indian Marginal Cost of Public Funds. The argument presented here gives us one more perspective on the problem. Low tax rates are a low load from a public administration point of view; until load-bearing capacity has been created, it is unwise to subject the system to high load.
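A small worked example makes the rate-squared argument concrete. The numbers are illustrative, with deadweight loss approximated in the usual Harberger fashion as proportional to the squared rate times the base:

```python
# Two tax configurations that raise the same revenue can have very different
# distortion costs, because deadweight loss scales with the SQUARE of the rate.

def revenue(rate, base):
    return rate * base

def deadweight_loss(rate, base, k=0.5):
    # Harberger-style approximation: DWL = k * rate^2 * base (k illustrative)
    return k * rate**2 * base

low_rate_wide_base = (0.10, 200.0)    # 10% on a base of 200
high_rate_narrow_base = (0.20, 100.0) # 20% on a base of 100

# Both raise revenue of 20, but the narrow-base option doubles the distortion.
```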

There is an interesting tension here between two different ways to make the load smaller. A lower rate requires a larger base, and a wider base means a bigger tax administration machinery and a larger number of transactions, which induces a greater load. But a higher tax rate raises what is at stake, and increases the load substantially more.

By this reasoning, the way forward on building a sound framework for tax administration is:

  1. First, design a very simple tax policy (e.g. a single-rate comprehensive GST) with low discretion for front-line employees, so as to keep the load low. At first, set very low tax rates, to reshape the incentives of citizens and tax officials, and to keep the load upon public administration low.
  2. Build and run a tax administration which is able to deliver sound tax revenues under these conditions. E.g. a 5% comprehensive VAT rate should generate VAT collections of near 3% of GDP. This requires sophisticated thinking about tax administration.
  3. Use independent private studies (e.g. comprehensive audits of some persons) and perception studies to measure the extent to which bribes are paid instead of tax.
  4. Only after this is working well, consider moving up to higher tax rates and/or a more complex tax policy. 
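The benchmark in step 2 is a simple ratio worth making explicit: a 5% rate that yields revenue of 3% of GDP implies that the effectively taxed base covers 60% of GDP.

```python
# Sanity check on the step-2 benchmark: revenue share = rate * base share,
# so base share = revenue share / rate.
vat_rate = 0.05
revenue_share_of_gdp = 0.03
implied_base_share = revenue_share_of_gdp / vat_rate  # 0.6 of GDP
```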

Implications for GST design

How can a GST be designed so as to have low load? If we wanted to walk before we run, how would we design the GST?

  1. A low single rate of 12-15%. Multiple rates significantly increase the workload.
  2. A single rate and comprehensive base, which simplifies the workflow, reduces discretion, and eliminates classification disputes.
  3. Centralised registration. State-wise registration multiplies the administration workload and the compliance burden for taxpayers manifold: 36-fold for those who have to register in all of the states and union territories.
  4. Automatic refunds of excess credits, without discretionary approval by officials.
  5. Eliminate the concept of self-supplies within a legal entity, as the number of transactions increases several fold if self-supplies are made taxable. No supply should be reckoned unless there is another person to whom a supply can be made.
  6. The system of penalties and assessments needs to be simple, with a bias in favour of low discretion and low penalties. 

There has been a lot of focus on the `revenue neutral rate'. One twist on this is that the government is a significant buyer of goods and services. Thus the `budget neutral rate' would be a bit lower than the revenue neutral rate. This makes it possible for the rate to be lower when compared with the conventional analysis.

Single registration is a subject of some debate. Even when each state has its own GST law, it is very much possible to have single registration. The law would impose the tax on taxable supplies made in the state, allow input tax credits, and specify reporting obligations for information. These provisions will apply to any person registered in the country. There need not be the requirement of separate registration in each state. Computations of tax and reporting of the information could be on a single return with state-wise annexures. The key difference between state-wise and central registration would be that all of the state-wise compliances would be on a single registration portal, and the person will be treated as a single person (note that under the current Model law, each registration number is treated as belonging to a different person). This is how the Canadian GST operates, i.e., with single registration, but with multiple federal and provincial GST laws.

Does GST implementation require single control? We think single control is neither desirable nor feasible. Scrutiny and audits at the state level will necessarily require information on the dealer on a Pan India basis, which individual states would not have. Both the Centre and States would want to monitor compliance with their respective tax laws. If they want autonomy in administration of the GST, what is needed is a harmonisation agreement to avoid duplication of administrative effort and inconsistent policies across the country. For example, the governments should agree on a common rulings and interpretations authority, and common administration guidelines. A clean solution would be to have a common audit and scrutiny function that is jointly staffed by Centre and State officials. Some 12 States have already opted for a full-service model of GSTN, under which even scrutiny and audit would be done by GSTN.


The 122nd Amendment is a great step forward. It opens the possibility that India will become one country, one market. At present, tax administration in India works poorly. We do not know how to build a capable and uncorrupt tax administration. In the absence of this State capacity, we should start with a GST design that imposes a low load upon tax administration. Only after this is proven to work at high levels of probity and operational efficiency should we consider going up to higher levels of load. This approach can then be extended across the entire GST in all the States. To keep the load low, we need to expand the Prime Minister's vision of One India, One Tax, to “One India, One Tax, One Registration, One Rate”.

Satya Poddar is a senior tax advisor with Ernst & Young in India. Ajay Shah is a researcher at the National Institute for Public Finance and Policy.

Thursday, September 01, 2016

Where will production take place in a robot-intensive world?

by Ajay Shah.

Vivek Wadhwa has an article in Quartz on China's difficulties in a robot-heavy world. Earlier this year, there was news about Foxconn replacing 60,000 workers with robots. Wadhwa says:

  • Shipping costs to the US go down when goods are made closer to the US. Today the supply chain is: Global raw materials -> China -> US. Instead it can be Global raw materials -> US.
  • The skills required to run a robot-intensive factory are greater than the skills required to do low-end manufacturing using humans.
  • Hence, a lot of robotic manufacturing will return to the US.

I agree with this. Similar things are going on with services production also, as improvements in artificial intelligence take work away from the cheap Indian BPO. There are three more perspectives that should be brought into this line of thought.

Three more reasons for robot-intensive manufacturing to favour production in mature market economies

1 Safety of expensive physical assets is a concern. A person who places vast physical assets into a certain location worries about expropriation risk. The investment in a factory can go bad owing to regime change, outbreaks of anarchy, unfair changes in taxation, imposition of capital controls, etc. China is a greater risk. Placing manufacturing in developed countries is safer. I am reminded of the vast Reliance facility in Jamnagar, which is partly about going as close to the crude oil of the Middle East as possible, but avoiding the political risk of the Middle East.

2 Cost of capital. When manufacturing becomes highly capital intensive, the cost of equity and debt becomes more important. Developed countries have mature financial systems where the cost of capital is low. Right now, the cost of capital is extremely low in developed countries as the policy rate is near zero. It is attractive to finance yourself in USD, manufacture in Oregon, and earn cashflows in dollars. Conversely, countries with capital account restrictions, such as China or India, will find it more difficult to attract investment as the cost of capital in these places is higher.

3 Cost of electricity. Firms like Google and Apple have placed data centres near hydel power in Oregon. Data centres, which consume a lot of electricity and require very few workers, are perhaps at the forefront of what robot-intensive manufacturing will be. There are many places in developed countries where there is reliable and cheap access to renewable energy. These would be ideal locations to place large scale robot-heavy factories. (They would need good infrastructure of transportation and communication also).

By this logic, there are five reasons why robot-intensive manufacturing will be attracted to developed economies instead of a place like China: (1) Reduced costs of transportation; (2) Skill intensity which requires a superior workforce; (3) Expropriation risk for a big block of $K$; (4) Cost of capital on a big block of $K$; (5) Cheap renewable energy.

I'm reminded of an earlier article on the economics of cloud computing from an Indian perspective, and the developments in that industry give us some insight into the new world of robot-intensive manufacturing.

Implications for China and India

These developments induce depreciation in the existing Chinese capital stock. There is a lot of $K$ in China which is oriented around the old ways of manufacturing, and the market value of that $K$ will go down. This is similar to the diminution of a country's capital stock that comes about when trade liberalisation takes place and a lot of the old factories become worth less.

In China and in India, there is a low-skill middle class that got jobs in manufacturing or in BPO. These two kinds of jobs are threatened by improvements in artificial intelligence and robots. Millions of people who have got this prosperity for the first time will be unhappy. In both cases, their unhappiness could be exploited by messages of nationalism and religion.

How should a country like India compete in this world? Let's think about each of the five channels of influence: (1) Reduced costs of transportation to consumers in developed markets; (2) Skill intensity which requires a superior workforce; (3) Expropriation risk for a big block of $K$; (4) Cost of capital on a big block of $K$; (5) Cheap renewable energy.

We should respond to #1 by improving the infrastructure of transportation, and we should note that a lot of Indian firms will do outbound FDI to stay competitive in this landscape.

We should respond to #2 by building higher education.

We should respond to #3 by strengthening our foundations of liberal democracy and rule of law, with sophisticated institutional arrangements on issues like capital controls and taxation.

We should respond to #4 by doing inflation targeting, removing capital controls and ending financial repression.

We should respond to #5 by undertaking reforms which improve the working of the electricity sector.

Other interesting implications

When raw materials -> China -> DM is replaced by raw materials -> DM, this will not be good for demand for shipping.

Economists think in terms of the HMY model, where a firm faces fixed costs of setting up operations near the customer, and after that saves on the transaction costs of shipping. Under the HMY model, more efficient firms export and the most efficient firms do outbound FDI. In a world of robotised manufacturing, the tension will be between placing manufacturing close to the customer (thus minimising the cost of getting goods to the customer) and economies of scale. If there were no economies of scale, we could imagine a small 3D-printing facility placed near every Amazon warehouse. The right scale of manufacturing will depend on the extent to which powerful economies of scale persist even with modern manufacturing.
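The HMY proximity-concentration tradeoff can be made concrete with a toy calculation (all numbers illustrative): exporting incurs a per-unit shipping cost, FDI incurs a fixed plant cost, so FDI pays off only when sales volume, which rises with firm efficiency, is large enough.

```python
# Toy version of the HMY export-vs-FDI choice. A firm serving a foreign
# market compares exporting (per-unit shipping cost s) against FDI
# (fixed plant cost F): FDI wins exactly when s * q > F.

def serve_market(q, price, unit_cost, s=2.0, F=500.0):
    profit_export = (price - unit_cost - s) * q
    profit_fdi = (price - unit_cost) * q - F
    return "FDI" if profit_fdi > profit_export else "export"

# With s=2 and F=500, the cutoff is q = 250: low-volume (less efficient)
# firms export, high-volume (more efficient) firms do FDI.
```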

Middle and top management in the operations of global firms is today about managing the complexities of manufacturing in China. In the new world, it will be about getting raw materials to DM factories, and the construction and management of robot-heavy manufacturing. There will be reduced demand for `China hands' who know how to build production systems involving China, or `India hands' who know how to build low-end services production in India.

Measuring the transmission of monetary policy in India

by Rajeswari Sengupta.

The Finance Bill, 2016 amended the RBI Act, 1934 to establish the objective for RBI (where previously there was none): an inflation target. With the enactment of this law, the RBI is committed to meet pre-announced inflation targets within a specific period of time. For long, India has faced the adverse consequences of a discretionary monetary policy (link, link). Inflation targeting (IT), if implemented successfully, will improve accountability, certainty and transparency in India's monetary policy, and help stabilise the Indian macroeconomic and financial environment.

The weak link today is the monetary policy transmission (MPT). In the absence of strong and reliable links between the policy instruments controlled by the RBI and aggregate demand in the economy, it becomes difficult to do IT. In a recent paper (Mishra, Montiel, and Sengupta, 2016), we present evidence of a weak monetary policy transmission in India.

We explore two main issues in the paper:

  1. How does India fare in the factors that affect MPT?
  2. How effective is the bank lending channel of MPT in India?

Factors affecting MPT

Changes in monetary policy instruments translate into changes in aggregate demand through three main channels: bank lending or the interest rate channel, the exchange rate channel, and the asset price channel. The effectiveness of these channels is shaped by the extent of capital controls, policy constraints on exchange rate flexibility, and the structure of the financial system.

Financial market integration and exchange rate regime: According to Robert Mundell's "impossible trinity", in an economy with a fixed exchange rate, monetary policy loses autonomy as integration between domestic and international financial markets rises. Under a floating exchange rate, by contrast, the power of monetary policy to affect aggregate demand increases with the degree of financial integration.

We show in the paper that India has a relatively closed capital account in de facto terms, compared to major emerging economies such as Argentina, Brazil, Chile, Colombia, Israel, Malaysia, Mexico, Thailand, Turkey, Russia and South Africa. The exchange rate of the Rupee is determined in the interbank market. The RBI periodically intervenes in that market, buying and selling both spot and forward dollars at the market exchange rate. The limited degree of financial markets integration and RBI's interventions in the foreign exchange market are likely to mute the exchange rate response to monetary policy.

Structure of the domestic financial system: According to Mishra, Montiel and Spilimbergo (2012), MPT works better as the size and reach of the financial system increase, the degree of competition in the formal financial sector goes up and the domestic institutional environment lowers the costs arising from financial frictions.

We present evidence in our paper that the size of the formal financial system in India, measured by conventional indicators (such as the number of bank branches scaled by population or the percentage of adults with accounts at a formal financial institution) is relatively small compared to other advanced and emerging economies. The formal banking sector does not intermediate for a large share of the economy and is highly concentrated. India lags behind advanced and emerging economies in developing its bond market. Indicators of domestic institutional environment such as rule of law, regulatory quality, control of corruption, and political stability, show that India is roughly at the global median.

This suggests that the kind of public goods on which the financial system depends (such as enforcement of property rights, accounting and disclosure standards) may not be as readily available in India as in other countries. This would make financial intermediation a costly activity, weakening the effect of monetary policy actions.

Bank lending channel of MPT

There are two stages of the transmission process in the bank lending channel: (i) the transmission from policy instruments to bank lending rates, and (ii) the transmission from bank lending rates to final outcomes such as inflation and output. We use a structural vector autoregression (VAR) model in the paper to estimate the effects of a shock to monetary policy instruments on outcome variables through the impact on bank lending rates. The VAR model captures the full dynamic interactions among all the variables of interest. Given a shock to, say, the policy rate, it is possible to trace out the responses of all other variables to that shock, period by period.
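To fix ideas, here is a stylised sketch, in Python with numpy, of how a VAR traces out responses "period by period". The two-variable system and the coefficient matrix below are invented for illustration; they are not estimated from Indian data.

```python
# Toy two-variable VAR(1): y_t = A @ y_{t-1} + e_t,
# with y = (policy rate, bank lending rate). A is illustrative only.
import numpy as np

A = np.array([[0.80, 0.00],    # policy rate follows its own lag
              [0.15, 0.70]])   # lending rate responds to the lagged policy rate

# Impulse response to a one-unit policy-rate shock at t = 0:
# iterate the system forward with e_0 = (1, 0) and no further shocks.
horizons = 12
irf = np.empty((horizons, 2))
y = np.array([1.0, 0.0])
for h in range(horizons):
    irf[h] = y
    y = A @ y

# The lending-rate response is hump-shaped: zero on impact,
# building up before decaying as the shock dies out.
peak = int(np.argmax(irf[:, 1]))
print(peak, irf[peak, 1].round(3))  # peak at h = 4
```

In an estimated VAR, A would come from the data and the shock would be identified from the residual covariance; the period-by-period iteration is the same.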

In India, two broad groups of instruments have historically been used by the RBI to conduct monetary policy: (i) price-based instruments such as the repo rate and the reverse repo rate, which affect the cost of funds for banks, and (ii) quantity-based instruments such as the Cash Reserve Ratio (CRR) and Statutory Liquidity Ratio (SLR), which affect the supply of banks' loanable funds.

We consider the effects of four instruments in our analysis: (i) the repo rate, (ii) the average of repo and reverse repo rates (price indicator), (iii) the sum of CRR and SLR (quantity indicator), and (iv) a composite score-based indicator of monetary policy stance. The price and quantity indicators have generally moved in the same direction during our sample period of 2001 to 2014. The exception is between 2011 and 2012, when increases in the policy rates suggested a tightening of monetary policy while the quantity indicator continued to move in a loosening direction.

To address this complication, we construct a score-based indicator of monetary policy stance following Das, Mishra and Prabhala (2015). We assign scores of 0, +1, -1, respectively if there is no change, an increase, or a decrease in the values of the four monetary policy instruments in any given month during our sample period. We calculate the overall stance of monetary policy by taking an unweighted sum of the scores for the individual instruments.
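As a concrete illustration, the scoring rule can be sketched in a few lines of Python. The monthly instrument values below are made up for illustration, not taken from the paper's data.

```python
# Sketch of the score-based stance indicator described above
# (following Das, Mishra and Prabhala, 2015), on invented data.

def sign_score(change):
    """+1 for an increase, -1 for a decrease, 0 for no change."""
    return (change > 0) - (change < 0)

# Each row: (repo, reverse repo, CRR, SLR) in a given month, in percent.
instruments = [
    (6.50, 5.50, 4.00, 21.5),
    (6.75, 5.75, 4.00, 21.5),   # repo and reverse repo raised
    (6.75, 5.75, 4.25, 21.5),   # CRR raised
    (6.50, 5.50, 4.25, 21.0),   # repo, reverse repo and SLR cut
]

# Overall stance each month: unweighted sum of per-instrument scores,
# so a positive number indicates tightening, a negative one loosening.
stance = [
    sum(sign_score(now - before) for before, now in zip(prev, curr))
    for prev, curr in zip(instruments, instruments[1:])
]
print(stance)  # [2, 1, -3]
```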

We use the "benchmark prime lending rate (BPLR)" of the banking sector till June 2010 and the "base rate" thereafter. Till 2010, the BPLR determined the interest rates charged by Indian banks on different categories of loans. From July 2010, it was replaced by the average base rate charged by the five largest commercial banks. We use the seasonally adjusted headline CPI inflation as an outcome variable. Another outcome variable is the output gap measured using the Index of Industrial Production (IIP). Since IIP covers only the manufacturing sector, we interpret the results on transmission to output with adequate caution.

We motivate our choice of endogenous variables in the VAR model using a modified version of the simple, open-economy New Keynesian model developed by Adam et al. (2016). The model consists of an IS equation, a New Keynesian Phillips curve, an uncovered interest parity condition, an interest rate pass-through equation, and a Taylor-type monetary policy rule. Consistent with this model, we estimate a VAR for India with five endogenous variables: the output gap, inflation rate, exchange rate, bank lending rate and the monetary policy instrument.

Shocks to world food and energy prices may exert important effects on inflation in India. Since India is unlikely to affect world food and energy prices, these prices, measured in US dollars, can be treated as exogenous to developments in India. We therefore include them as exogenous variables in some versions of our estimated VARs. This is important because, to the extent that shocks to either of these variables help predict future headline CPI inflation in India, excluding them would undermine the identification of monetary policy shocks.

We follow two alternative identification schemes in the paper. In the first, the monetary policy variable is ordered first, followed by the bank lending rate, output gap, CPI inflation and the exchange rate. This reflects the assumption that the RBI does not observe (or does not react to) macroeconomic variables within the month, while the macro variables are potentially affected by monetary policy shocks contemporaneously.

In the second scheme, the RBI responds to macro variables within the month, but those variables in turn respond to monetary policy only with a lag. The monetary policy variable is ordered last, and the ordering of the other variables remains the same.
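Both schemes are recursive (Cholesky) identifications with different orderings. A small numpy sketch, using an invented two-variable residual covariance, shows what the ordering buys: the variable ordered first can move the others on impact, while the variable ordered last moves none of them within the month.

```python
import numpy as np

# Illustrative reduced-form residual covariance for (policy rate, lending rate).
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# Scheme 1: policy ordered first -- lower-triangular Cholesky factor.
P1 = np.linalg.cholesky(sigma)

# Scheme 2: policy ordered last -- reverse the ordering, factor,
# and reverse back, which yields an upper-triangular factor.
P2 = np.linalg.cholesky(sigma[::-1, ::-1])[::-1, ::-1]

# On-impact effect of a policy shock on the lending rate:
# nonzero under scheme 1, exactly zero by construction under scheme 2.
print(round(P1[1, 0], 3), P2[1, 0])  # 0.05 0.0
```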


Across both identification schemes and for all four monetary policy measures, a tightening of monetary policy is associated with an increase in bank lending rates. However, the effect is statistically different from zero only at the 90 percent confidence level, which is weak evidence for the first stage of transmission in the bank lending channel.

The effect of monetary policy changes on bank lending rates is hump-shaped, with the peak effects appearing between 5 and 10 months in all the cases considered.

The pass-through from the policy rate to bank lending rates is incomplete. For example, an increase of 25 basis points in the repo rate is associated with an increase of only about 10 basis points in the bank lending rate, a pass-through of roughly 40 percent.

The effect of monetary policy changes on the exchange rate is not statistically significant for any of the four monetary policy measures used. This suggests that the exchange rate channel of MPT is effectively non-existent in India.

Our results provide no support for the second stage of transmission in the bank lending channel. We find no evidence of an effect of monetary policy changes on either the CPI inflation rate or the output gap.


A low degree of de facto capital mobility, RBI's interventions in the foreign exchange market, and the structure of the financial system suggest that the exchange rate and the asset price channels of MPT have low effectiveness in India. The burden of monetary transmission is likely to fall on the bank lending or interest rate channel. We present new evidence in our paper that the bank lending channel of MPT does not work well either.

With the adoption of IT, RBI has taken a step in the right direction. The enactment of the law by itself will not achieve price stability. A strong transmission mechanism from the policy rate to aggregate demand is crucial for the successful implementation of the new monetary policy framework. The legal mandate of IT must now be used to improve the effectiveness of MPT.


References

Das, Abhiman, Prachi Mishra, and Nagpurnanand Prabhala (2015), The Transmission of Monetary Policy Within Banks: Evidence from India, mimeo.

Li, Bin Grace, Stephen O'Connell, Christopher Adam, Andrew Berg, and Peter Montiel (2016), VAR meets DSGE: Uncovering the Monetary Transmission Mechanism in Low-Income Countries, IMF Working Paper, No. 16/90.

Mishra, Prachi, Peter J. Montiel, and Antonio Spilimbergo (2012), Monetary Transmission in Low-Income Countries, IMF Economic Review, 60, 270-302.

Mishra, Prachi, Peter J. Montiel and Rajeswari Sengupta (2016), Monetary Transmission in Developing Countries: Evidence from India, IMF Working Paper, No. 16/167.

Rajeswari Sengupta is a researcher at the Indira Gandhi Institute of Development Research, Bombay.