Monday, September 19, 2016

Interesting readings

An impressive policy paper: Self trading is not synonymous with market abuse by Nidhi Aggarwal, Chirag Anand, Shefali Malhotra, Bhargavi Zaveri, 29 June 2015, and an impressive response by SEBI today: Sebi reverses stand on self-trades by Jayshree P. Upadhyay in Mint.

Finance for the poor: policy and not programs by Ajay Shah in The Business Standard, 19 September.

Stopping the clock going back in Singur by Ashok K Lahiri in The Business Standard, 15 September.

SEBI's Flawed Attempt at Setting the Frequencies Right by Kanad Bagchi in The Wire, 15 September.

Accountability of justice by K. Parasaran in The Indian Express, 14 September.

Top Russian anti-corruption official had $120M in cash in his apartment by Cory Doctorow in Boing Boing, 11 September. This reminds us of the fallacy of searching for something like a Lok Pal who will solve all our problems. This also led me to the Weight of a million dollars by Richard-ga in Google Answers, 5 February 2006, and brings a new perspective to illicit money movements in the context of Faulty tradeoffs in security, 10 May 2010.

A note for Dr Patel by Ila Patnaik in The Indian Express, 9 September.

Before amending the law by Abhinav Kumar and Sudhanshu Sarangi in The Indian Express, 9 September.

NCRB data: handle with care by K. P. Asha Mukundan in The Hindu, 9 September.

Air pollution cost India 8.5% of its GDP in 2013: study by Dipti Jain in Mint, 9 September.

Tariffs Do More Harm Than Good at Home by Maurice Obstfeld in the iMFdirect blog, 8 September.

The need for a 'totaliser' revolution by Sanjay Kumar in The Hindu, 7 September.

China's Summer of Discontent by Elizabeth C. Economy in The CFR Blog, 6 September.

Not majority vs minority by Pratap Bhanu Mehta in The Indian Express, 6 September.

Vaccination isn't just for babies by Sujata Kelkar Shetty in Mint, 5 September.

No harm in pre-consultation by Somasekhar Sundaresan in The Business Standard, 5 September.

The house is on fire! by Gary Saul Morson in The New Criterion, September 2016.

Myths and realities about America's infrastructure spending by Edward L. Glaeser in The City Journal, Summer 2016.

Aadhaar by Numbers by Sunil Abraham in NIPFP YouTube Channel.

Academic snobbery about top journals and top universities is under serious attack. Even without retractions, top journals publish the least reliable science in The Bjoern Brembs Blog, January 2012. Why might this be happening? The top journals probably encourage research that is more conscious of fashion, involves more p-hacking, and comes from authors who do more social engineering. Also see.

How to test your decision-making instincts by Andrew Campbell and Jo Whitehead in The McKinsey Quarterly, May 2010.

Friday, September 16, 2016

Arriving at the correct value of the rupee

by Ajay Shah.

A recent front page story in the Indian Express came as a surprise examination for many economists in India. When currency policy is proposed, four ideas are useful:

  1. Nobody knows the correct exchange rate. Asking a government official for the correct price of the rupee is as pointless as asking him for the correct price of steel or the correct level of Nifty.
  2. We were once in a complicated world where RBI openly said that it had no framework. RBI governors heard pleas from importers and exporters, played favourites, and earned political capital. That period (1934-2015) is now behind us. Now, for the first time, RBI is accountable. It has an objective: inflation. The instrument (control of the policy rate) is used up in giving us the outcome (4% inflation).
  3. Chasing an exchange rate objective can lead to small problems (e.g. the exchange rate management of 2002-2007 kicked off an inflation crisis from 2006) or big problems (the rupee defence of 2013). Wisdom in public policy involves avoiding such adventurism.
  4. While an inflation targeting central bank should not pursue exchange rate policy, the exchange rate is an important input for an inflation targeting central bank. Changes in the exchange rate feed into domestic inflation through the price of tradeables. Thus, changes in the exchange rate are a useful input for forecasting inflation. The essence of good monetary policy is forecasting inflation [example]. RBI should consume the exchange rate, made by the market, as an input into its monetary policy process.
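The forecasting role of the exchange rate in the last point can be illustrated with a toy regression. This is a stylised sketch, not RBI's actual model: the synthetic data, the pass-through coefficient of 0.2, and the one-lag structure are all invented for illustration.

```python
import numpy as np

# Stylised inflation-forecasting regression: lagged exchange-rate
# depreciation helps predict inflation through the price of tradeables.
rng = np.random.default_rng(0)
T = 400
depreciation = rng.normal(0.0, 2.0, T)   # % change in INR/USD (synthetic)
inflation = np.empty(T)
inflation[0] = 4.0
for t in range(1, T):
    # invented data-generating process: persistence + pass-through + noise
    inflation[t] = 0.4 * inflation[t - 1] + 0.2 * depreciation[t - 1] \
                   + 2.4 + rng.normal(0.0, 0.5)

# OLS: regress inflation on its own lag and lagged depreciation
X = np.column_stack([np.ones(T - 1), inflation[:-1], depreciation[:-1]])
y = inflation[1:]
const, rho, passthrough = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(passthrough, 2))   # recovers a pass-through close to 0.2
```

In this setup the exchange rate is consumed as an input to the forecast; the central bank is not targeting it.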

Thursday, September 15, 2016

Fiscal consequences of shifting an inflation target from 2% to 4%

by Ajay Shah.

Most advanced economies have a nominal anchor for monetary policy in the form of a 2% inflation target. This has presented difficulties when the policy rate hits 0%: the central bank must either use a new and less predictable tool -- quantitative easing -- or find ways to force the short rate below zero. Both are difficult.

Some people propose that the inflation target should be raised to 4%. This possibility is posed as a choice between two unpleasant things: on one hand, higher inflation impedes the smooth working of the economy; on the other, a 2% target means grappling with the zero lower bound on interest rates. Ben Bernanke's recent blog article is an example of this debate.

In addition to these arguments, there is a fiscal perspective that needs to be brought on the table.

Suppose we suddenly raise the inflation target from 2% to 4%, and suppose there is no disruption: everything works out smoothly. In this ideal scenario, the yield curve should shift up in parallel by 200 bps at all maturities.

This would be bad news for persons holding nominal bonds issued by the government, nominal pensions, nominal bonds issued by private corporations, etc.

A person holding a nominal pension backed by a corporation will be angry, but there will be nothing she can do about it. Persons who hold claims upon the government, however, would not accept these losses lying down. They would organise themselves politically and ask for compensation for the losses they would face if such a decision were taken.

How large are the magnitudes? Suppose a country has explicit nominal government bonds and implicit nominal pension debt adding up to 100% of GDP, with an average maturity of 10 years. The 200 bps parallel shift of the yield curve would impose a loss of roughly 20% on these claims, which works out to 20% of GDP. There is no democracy in which monetary policy wonks can impose a cost of 20% of GDP upon some people without a political fight. A negotiation would take place in which the adversely affected persons ask for compensation.
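The 20%-of-GDP figure can be checked with the standard duration approximation (price change ≈ -duration × yield change). The sketch below assumes, for simplicity, that duration equals the 10-year average maturity; the duration of a coupon bond is somewhat below its maturity, so this is an upper-bound illustration.

```python
# Back-of-the-envelope check of the 20%-of-GDP figure.
debt_to_gdp = 1.00   # nominal claims on the government, as a share of GDP
duration = 10        # years (using average maturity as a rough proxy)
yield_shift = 0.02   # 200 bps parallel shift up

loss_on_bonds = duration * yield_shift           # ≈ 20% fall in bond prices
loss_as_share_of_gdp = debt_to_gdp * loss_on_bonds
print(loss_as_share_of_gdp)   # ≈ 0.2, i.e. 20% of GDP
```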

This negotiation will be a difficult one. As an example, envision the US Treasury, the US Fed, and bondholders sitting in a room arguing about 20% of GDP. Things become more difficult in countries where the government owes nominal defined benefit pensions.

If the negotiation works out smoothly and cleanly, the debt/GDP of the country goes up by 20 percentage points. This will make bondholders and credit rating agencies more nervous about the fiscal solvency of the country. While some countries (e.g. Australia) are in good fiscal health, most advanced economies are not.

The last and most troublesome issue is that of credibility and confidence. Many advanced economies have a difficult fiscal situation, particularly when off-balance-sheet liabilities are counted. The bond market has generally been quite well disposed towards these countries; e.g. the bond market assumes the US will solve its fiscal crisis, even though nobody can see how this would be done. One key element of this confidence is trust in the 2% inflation target. As long as fiat money is anchored by a 2% inflation target, the fiscal authority cannot inflate away debt using inflation surprises. This reassures bondholders, who are then willing to lend to the sovereign at low interest rates.

Suppose the negotiations associated with the increase in the inflation target do not work out well. Some bondholders walk away feeling they were unfairly forced to accept a loss. There will be less trust the next time around. The bond market will not trust the 4% inflation target the way it has come to trust the 2% target. It will demand a risk premium, and this mistrust of the institutional mechanism of monetary policy could persist for decades to come.

For some advanced economies, under certain kinds of mishandled negotiations, the project of trying to raise the inflation target from 2% to 4% could lead to a sharp one-time increase in the debt/GDP ratio and a higher required interest rate for government debt. These two outcomes could significantly worsen the fiscal situation for the government.

These considerations should be brought into the picture when evaluating the costs and benefits of raising the inflation target from 2% to 4%.

On related issues, this article from June 2009 has worked out reasonably well. One change that intervened was that the US moved closer to formal inflation targeting in 2012, thus removing some of the concern.


I acknowledge useful discussions with Josh Felman on these issues.

Saturday, September 10, 2016

The great Indian GDP measurement controversy

by Rajeswari Sengupta.

In 2015, the Central Statistical Office (CSO) revised the way GDP is calculated in India. According to the new series, India is the fastest growing large economy in the world, with a 7.1 percent real growth rate. Other trusted measures of the state of the economy convey a discordant picture. This discrepancy has led to an active debate with two parts. One part concerns the extent to which the official GDP data are accurate. The other criticises the CSO's methods.

This article summarises the literature that examines CSO's methods. There are two main areas of concern: the way manufacturing Gross Value Added (GVA) has been estimated and the methodology for calculating deflators.

Manufacturing GVA


The manufacturing sector has been at the centre of the GDP debate. The methodological changes for this sector and consequently the data revisions have been substantial. Manufacturing growth for 2012-13 was revised up from 1.1 percent to 6.2 percent, while that for 2013-14 was increased from -0.7 percent to 5.3 percent. Various authors (Nagaraj, 2015a, 2015b, 2015c; Rajakumar, 2015; Nagaraj and Srinivasan, 2016; Sapre and Sinha, 2016) have questioned the reliability of the new estimates, on several grounds.


Enterprise vs. Establishment approach: In a major innovation, the new GVA methodology shifted data collection from establishments (or factories) to enterprises (or firms). Sapre and Sinha (2016) point out that lack of clarity on measures of output and costs at the enterprise level can lead to imprecise estimates of GVA. The activities of firms can be much more diverse than those of factories, and not all of these functions would qualify as manufacturing. Yet all the value added of enterprises classified as "manufacturing firms" has gone into the calculation of manufacturing GVA. This will inflate the level of output and possibly also the growth rate, if the ancillary activities are growing faster than the manufacturing ones.


Blowing up of GVA: Extrapolating from samples ("blowing up") is not a new feature of the current GDP series. What has changed is the database used. Previously, manufacturing GVA was based on the RBI's fixed sample of large private companies. Under the new series, the MCA21 database is used to compile a set of "active" companies, which have filed their annual financial returns at least once in the past three years. The problem is that, for any given year, information from several active companies remains unavailable at the cut-off date for data extraction. In such cases, the GVA of the available companies must be blown up to account for the unavailable ones. There are multiple issues with this blowing-up method.

  • The year-wise number of available and active companies in manufacturing is not publicly available. Hence, year on year, the exact number of companies for which GVA is blown up is unknown.
  • While the Ministry of Corporate Affairs has made the filing of annual financial returns mandatory for all registered companies, it is not known how many of these companies produce any output on a regular basis.
  • The blowing-up factor is the inverse of the ratio of the paid-up capital (PUC) of the available companies to that of the active set as a whole. Nagaraj (2015a, 2015b) argues that this is inappropriate since a large fraction of the MCA21 active set are "fictitious, shell companies" that exist only on paper. In that case the blowing-up method is likely to overestimate GVA.
  • Sapre and Sinha (2016) argue that blowing up using PUC is inappropriate because PUC and GVA have no one-to-one relation. Moreover, the actual GVA of some "active but unavailable" companies may be negative in a particular year; in such cases, blowing up GVA using the PUC factor can lead to overestimation.
  • The actual computation of the blowing-up factor applied by the CSO in the new series has not been described in detail in the official documents. This makes it difficult to replicate and analyse the process.
  • A single blowing-up factor has been used for private as well as public limited companies. Rajakumar (2015) points out that this is not appropriate, as the two groups diverge widely in their patterns.
  • The number of "available" companies reporting their annual financial returns to the MCA varies across years. As a result, the blowing-up factor that accounts for non-reporting companies also varies from year to year. As Nagaraj (2015a, 2015b) and Sapre and Sinha (2016) highlight, this variation results in wide fluctuations in the final GVA estimates.
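To make the mechanics concrete, here is a minimal sketch of the PUC-based blowing-up described above. The CSO's exact computation is not public, so the numbers and field names below are invented; the sketch only illustrates the implicit assumption that unavailable companies resemble the filers.

```python
# Stylised sketch of PUC-based blowing-up; all figures are invented.
available = [   # companies that filed by the cut-off date
    {"puc": 50.0, "gva": 12.0},
    {"puc": 30.0, "gva": -2.0},   # note: GVA can be negative
    {"puc": 20.0, "gva": 5.0},
]
puc_available = sum(c["puc"] for c in available)      # 100
puc_active_set = 160.0   # PUC of all "active" companies, filers or not

blow_up_factor = puc_active_set / puc_available       # 1.6
gva_available = sum(c["gva"] for c in available)      # 15
gva_estimate = blow_up_factor * gva_available         # ≈ 24

# The scale-up implicitly assumes the missing companies resemble the
# filers in their GVA-to-PUC ratio -- the assumption that Nagaraj and
# Sapre-Sinha question.
print(gva_estimate)
```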


Identification of manufacturing companies: Sapre and Sinha (2016) find that within the manufacturing sector several companies operate as wholesale traders or service providers. These companies may have changed their line of business since they were originally registered. These changes are not reflected in the Corporate Identification Number (CIN) assigned to the companies. Such misclassification will distort the manufacturing estimates, although not the overall GVA.


MCA21 vs. IIP: There are other problems with the manufacturing GVA calculation that have received less attention. For the manufacturing sector, GVA is derived from a combination of MCA21 numbers, Index of Industrial Production (IIP) estimates, and estimates of the unorganized sector from the Annual Survey of Industries (ASI). While MCA21 is a new database, the base year for the IIP data is still 2004-05. Moreover, the MCA21 data follow the "enterprise" approach mentioned earlier, while the ASI data follow the old "establishment" approach. The implications of these discrepancies are yet to be fully understood.


Deflators


Previously, estimates of real GDP relied heavily on production indices such as the IIP. Now, most real numbers are derived by taking nominal data and deflating them by price indices. If done well, this approach can give a more accurate measure of value added. But if the deflators used are inappropriate, the estimated real magnitudes will be distorted. And this may well have happened in the past few years, since there have been very large changes in relative prices (especially petroleum and other commodities), which are inherently difficult to capture in aggregate deflators. The issues here are as follows.


Double deflation: In most G20 countries, real manufacturing GVA is computed using a methodology known as double deflation. In this method, nominal outputs are deflated using an output deflator, while inputs are deflated using a separate input deflator. Then, the real inputs are subtracted from real outputs to derive real GVA. But in India things are done differently. Here, we compute the nominal GVA, and then deflate this number using a single deflator.

If input prices move in tandem with output prices, both methodologies give similar results. But if the two price series diverge, as they have for the past few years in India, single deflation can overstate growth by a big margin.

The reason is not difficult to see. If the price of inputs falls sharply, profits increase, and nominal value added goes up. Since real GDP is supposed to be measured at "constant prices", this increase needs to be deflated away. Double deflation does this easily. But single deflation does not. In fact, if a commodity-weighted deflator like the Wholesale Price Index (WPI) is used, as under the current methodology, the deflator itself falls along with input prices, so deflation pushes measured growth even higher than nominal growth. In this case, real growth will be seriously overestimated. A fuller explanation is provided here.

As the gap between input and output inflation starts to close, the problem will diminish. But that could also send a misleading signal, because it might seem that growth is slowing, when only the measurement bias is disappearing.
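A toy calculation makes the single-versus-double deflation point concrete. All numbers below are invented: physical quantities are held constant and output prices are flat, while input prices fall 20%, so true real growth is zero.

```python
# Toy example: single vs. double deflation when input prices fall.
# Base year: output 100, inputs 60, so real GVA = 40.
nominal_output = 100.0            # output price index stays at 1.00
nominal_inputs = 60.0 * 0.80      # input price index falls to 0.80 -> 48
nominal_gva = nominal_output - nominal_inputs         # 52

# Double deflation: deflate outputs and inputs separately, then subtract.
real_gva_double = nominal_output / 1.00 - nominal_inputs / 0.80   # 40

# Single deflation with a commodity-heavy deflator, which falls along
# with input prices (0.92 is an invented blended index):
wpi = 0.92
real_gva_single = nominal_gva / wpi                   # ≈ 56.5

growth_double = real_gva_double / 40.0 - 1            # 0% -- correct
growth_single = real_gva_single / 40.0 - 1            # ≈ +41% -- spurious
print(round(growth_single, 3))
```

Nothing real changed in this economy, yet single deflation reports a large gain in real GVA, which is the mechanism behind the overestimation described above.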


Service sector deflator: Deflator problems also plague the estimates for the service sector, which accounts for the bulk of GDP. Currently, the deflator used for much of this sector is the WPI. But the weight of services in the WPI is negligible. If instead the services component of the Consumer Price Index (CPI) were used, growth in this sector would be far lower than currently estimated.


WPI vs. CPI: Finally, there are questions about whether the WPI should be used as a deflator at all. The weights are now more than a decade old, and India's economic structure has changed radically over this period. In addition, the sample frame (the selection of goods sampled) is also out of date. The CPI is a better price index [link, link].

Potential refinements


Based on the foregoing, a number of refinements to the GDP methodology could be considered:

  • Releasing disaggregated information on firm output and cost items, to permit more precise estimation of manufacturing GVA given the shift from the establishment to the enterprise approach.
  • Altering the definition of the active set of manufacturing companies, to ensure the companies are truly active.
  • Releasing the number of active and available companies every year by industry or sector, to get a sense of the companies contributing to GVA.
  • Shifting the blowing-up factor from paid-up capital to another indicator, such as replacing growth rates for "active but unavailable" companies by the overall growth rate for the relevant subsector.
  • Using separate blowing-up factors for public and private limited companies. Currently the blowing-up factor does not take into account the size, industry or ownership of the unavailable companies.
  • Reviewing the classification of companies to ensure they are categorized appropriately.
  • Providing greater clarity and transparency about the database and methodology used to estimate the manufacturing sector GVA. Also, documents could be released explaining the precise method used to blow up the GVA estimates.
  • Adopting the double deflation method to calculate real manufacturing GVA.
  • Using the relevant CPI components to deflate service sector GVA.
  • More generally, the WPI could be replaced by the relevant CPI components, in the long period before a Producer Price Index (PPI) is developed which would be an ideal deflator.

Until this methodological debate is resolved, official GDP data should be used with caution, as they may not accurately reflect conditions in the economy. Other proxies for output are required.

Acknowledgements


I thank Josh Felman, Deep Mukherjee, R. Nagaraj, Amey Sapre, and Pramod Sinha for useful conversations.

References


Nagaraj, R. (2015a), Seeds of doubt on new GDP numbers: Private corporate sector overestimated?, Economic and Political Weekly, Vol. L, No. 13.

Nagaraj, R. (2015b), Seeds of doubt remain: A reply to CSO's rejoinder, Economic and Political Weekly, Vol. L, No. 18.

Nagaraj, R. (2015c), Growth in GVA of Indian manufacturing, Economic and Political Weekly, Vol. L, No. 24.

Nagaraj, R. and T.N. Srinivasan (2016), Measuring India's GDP Growth: Unpacking the Analytics & Data Issues behind a Controversy that Refuses to Go Away, India Policy Forum, July 2016.

Rajakumar, J. Dennis (2015), Private corporate sector in new NAS series: Need for a fresh look, Economic and Political Weekly, Vol. L, No. 29.

Sapre, Amey, and Pramod Sinha (2016), Some areas of concern about Indian manufacturing sector GDP estimation, NIPFP Working Paper 172, August 2016.



The author is a researcher at the Indira Gandhi Institute of Development Research.


Friday, September 09, 2016

UIDAI's 4th big public policy innovation: Build forts, not empires

by Praveen Chakravarty.

The insightful paper, and accompanying blog article, by Ram Sewak Sharma about three big innovations in UIDAI got me thinking about my own UIDAI experience. What were the key innovations which made it work out well, especially as viewed from the lens of the private sector? What lessons can we take away? In addition to his three big ideas, I have one more.

`Asset light business' is the new buzzword among investors, especially venture capital and angel investors. The world's largest taxi company owns no taxis (Uber); the world's largest provider of accommodation owns no hotel rooms (Airbnb); the world's largest movie house owns no movies (Netflix). This is the popular refrain among modern-day startups and their cheer-leading investors. In a similar vein, it can be argued that the world's largest identity provider owns no identity devices! To top it off, this was not a traditional startup in a college dorm by a 20-year-old. This is the Unique Identification Authority of India (UIDAI), a staid authority of the staid government of India, manned by staid bureaucrats.

On 26 March 2004, Bharti Airtel announced a large, first-of-its-kind outsourcing contract with IBM. It essentially meant that Bharti Airtel, the telecom player, would own no telecom equipment, network hardware or software; it would simply acquire and own customers. IBM, in turn, took on the responsibility of managing all the complex hardware and software required to run a massive telecom operation. Moreover, IBM was to be paid a percentage of the revenues that Bharti Airtel earned, not a fat, flat fee as was the prevailing norm then. Bharti Airtel grew from a base of 6.5 million subscribers to 250 million subscribers in 12 years, leaving most of its competitors behind. It is undeniable that this brave decision to smart-source its capital expenditure to IBM played a key role in Bharti Airtel's ability to scale so rapidly, a fact probably forgotten in the annals of its success.

It was then dubbed the `capex to opex' transformation by financial analysts such as myself: converting big sunk costs of capital expenditure into revenue-linked operating expenditure, yielding significant gains in efficiency and scale. Since capital controls keep the cost of capital in India elevated, there is a natural gain when an Indian firm contracts out the ownership of bulky capital assets to an MNC that enjoys global levels of the cost of capital.

When UIDAI embarked on providing a unique identity to a billion Indians across more than 6.5 lakh villages, the sheer scale was daunting. As the Ram Sewak Sharma paper rightly mentions, there was detailed thought behind the use of iris scans, field trials to test proof of concept, and so on. But perhaps the single biggest catalyst in converting this grandiose plan into reality was UIDAI's decision to smart-source identity data collection. Surely the technology-industry background of UIDAI's founding Chairman played a role in the decision to do a Bharti Airtel in public policy. Nevertheless, in hindsight, this decision to embrace the `capex to opex' theme, adapted to the Indian public policy environment, laid the `Aadhaar' for Aadhaar to scale so rapidly.

In an otherwise typical government project, offices would have been set up in every district, personnel hired, biometric scanners purchased, and identity information collected, all by a government body or a clutch of bodies. This would have meant incurring massive upfront capital costs of infrastructure, technology and people. UIDAI instead turned this on its head and decided to build an entire ecosystem of private vendors to do the data collection, with the costs of machines, people and infrastructure borne by the vendors. Essentially, this meant that authorised and approved UIDAI data collection centres and camps mushroomed all across the country in a short span of time, which made it easy for residents to register. But in the public policy world, unlike the corporate world, protecting against downside risks is far more important than any potential upside gains: protecting the data security of Indian residents is infinitely more important than any efficiency gains from outsourcing to private vendors. This was achieved by establishing strict oversight and control mechanisms that rested entirely with UIDAI.

The UIDAI exercised stringent control over data encryption and validation. While the biometric data of a billion Indians were collected by thousands of independent government and private agencies, all of them collected the data through standardised software provided by UIDAI, which encrypted the data before it was sent back to the UIDAI centre for validation. UIDAI incurred a cost of roughly Rs 65 for each individual's successfully captured biometric data. Thus, UIDAI did not have to draw up big financial plans and wait for funding from the Ministry of Finance before it could launch its activities across the country.

This was one of the biggest reasons that UIDAI could go from zero to 600 million unique identities in four years flat, perhaps the fastest scale-up of any government or even private sector initiative in recent times anywhere in the world. This was the power of the `capex to opex' or `asset light' innovation in policy implementation. This UIDAI philosophy of eschewing the temptation to `build empires', and choosing to `build forts' instead, can serve many large-scale government project implementations well.

We in India are gradually developing knowledge about how to build State capacity. Computer technology allows us to leapfrog, and achieve remarkable kinds of State capacity which were otherwise unavailable at our level of per capita GDP. We are crossing this river by feeling the stones. We are building experience, and we are building a literature. A paper by Ajay Shah in 2006, the Tagup report, Nandan Nilekani's book from 2013, the book by Nandan Nilekani and Viral Shah from 2016, the concept of the Financial Data Management Centre (FDMC) in the Indian Financial Code, the concepts of information utilities in the bankruptcy reform, the Ram Sewak Sharma paper, and my one idea in this article: these add up to the emergence of deeply grounded local knowledge on how to do this. These materials are great fodder for thinking, debate, and then actually doing things in India.


The author is Senior Fellow in Political Economy at IDFC Institute, a Mumbai think tank, and former consultant with the UIDAI.

Thursday, September 08, 2016

UIDAI's public policy innovations

by Ram Sewak Sharma.

The Unique Identification Authority of India (UIDAI) had the goal of issuing unique identification numbers to every resident of India. In a country as large as ours, this was a difficult task. UIDAI largely accomplished it within a short period of about six years. I believe it was able to do so only because it took many innovative and bold decisions. In a recent paper I examine some of these innovations. The paper also tries to derive lessons from UIDAI that could be applied in other government projects.

The Use of Iris Scans


The UIDAI felt that unless iris images were used in addition to fingerprints, it would not be able to fulfil its mandate of unique identification. However, there were many concerns related to the use of iris images. Was this technology mature enough? Was it too expensive? Were there enough vendors in the market to prevent lock-in?

The UIDAI set up a committee to deliberate on the issue of which biometrics to collect and what standards to use for unique identification. This committee recognised the value of using iris images in improving accuracy. However, it fell short of recommending the inclusion of the iris in the biometric set and left the decision to UIDAI.

After a detailed examination, the UIDAI came to the conclusion that the inclusion of iris to the biometric set was necessary for a number of reasons, such as ensuring uniqueness of identities, and achieving greater inclusion. In retrospect, this turned out to be one of the most important decisions of the UIDAI.

On-field trials


The practice of conducting on-field trials was an important innovation. When UIDAI began its mission, there were many questions inside and outside the organisation on whether the very idea of unique identification for every resident was feasible at all. The idea of using biometrics to ensure the unique identification and authentication of all residents in India was an untested one. There were many assumptions behind it, and the data required to test the validity of these assumptions was not available. For instance, most of the research done on using biometrics for identification or authentication was done in western countries, and that too, on relatively small numbers of people.

The knowledge which had been produced by Western researchers was not applicable in the Indian context. Could the fingerprints of rural residents and manual labourers be captured successfully, or would they be excluded from Aadhaar? What about the iris images of old or blind people? Do the devices available in the market serve the purpose? What would be the most efficient and effective way to organise the process of enrolment? These questions needed to be answered if the project was to be successful.

The strategy adopted at UIDAI was to conduct a set of trials (called Proofs of Concept, PoCs) in several states across the country. The areas were selected to be representative of real-life enrolment and authentication. A number of biometric capture devices of different makes were used, and several different enrolment processes were tried out. The PoCs were carefully designed to answer sharply articulated questions, either to verify UIDAI's assumptions, or to capture the data required to fill in gaps in the UIDAI's knowledge. In essence, the scientific method was applied to create the knowledge that was pertinent to the decisions that had to be made at UIDAI. Resources had to be allocated to this work, and in return for that, major sources of project risk were eliminated.

The results of the PoCs indicated that the major hypothesis of the UIDAI was correct: that it was indeed possible to capture biometric data that was fit for the purpose of deduplication and verification. The results also showed that iris capture did not present any major challenges. An efficient enrolment process was devised using the data captured during these trials.

Competition


The last innovation considered in the paper relates to competition. Given the scale and importance of the project, the UIDAI felt it was important to increase efficiency and reduce costs by leveraging the competencies available in the private sector. At the same time, it was also essential to avoid a situation where any one private player could exercise significant power over the effective functioning of the Aadhaar system: the Authority wanted to ensure that there was a competitive market for providing services to it. To promote such a competitive market, the Authority used a two-pronged strategy of using open standards (creating standards where there were none), and using open APIs (Application Programming Interfaces).

The Authority used this strategy in procuring vendors for deduplication. Algorithms for deduplication had never been tested at the scale required in this project. To reduce the risk of poor quality deduplication, the UIDAI came up with a novel solution. It decided to engage three biometric service providers (BSPs), instead of just one. These BSPs would interface with the UIDAI systems using open APIs specified by the Authority. This decision helped avoid vendor lock-in, and increased scalability.

The UIDAI selected the three top bidders on the basis of the total cost per deduplication. Even after these three vendors were selected, the Authority was able to set up a competitive market among them, using an innovative system to distribute deduplication requests. Vendors were paid on the basis of the number of deduplication operations they carried out, and the Authority allocated operations according to how fast and how accurate each vendor was. The BSPs were thus constantly competing with each other to improve their speed and accuracy.
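The allocation mechanism can be sketched in a few lines. The post says only that operations were allocated on the basis of speed and accuracy; the scoring rule below (accuracy divided by mean latency), and all vendor names and numbers, are illustrative assumptions, not the UIDAI's actual formula.

```python
def allocate(batch_size, vendors):
    """Split a batch of deduplication requests across biometric service
    providers (BSPs) in proportion to a performance score.

    vendors maps a BSP name to (accuracy, mean latency in seconds);
    the score accuracy/latency rewards both speed and accuracy.
    """
    scores = {v: acc / lat for v, (acc, lat) in vendors.items()}
    total = sum(scores.values())
    shares = {v: int(batch_size * s / total) for v, s in scores.items()}
    # Hand the rounding remainder to the current leader.
    shares[max(scores, key=scores.get)] += batch_size - sum(shares.values())
    return shares

# Three hypothetical BSPs: fast-and-accurate A earns the largest share.
shares = allocate(1000, {"A": (0.99, 1.0), "B": (0.95, 2.0), "C": (0.90, 1.5)})
```

Because shares are recomputed as measured speed and accuracy change, a vendor that improves wins volume (and hence revenue) from the others, which is the competitive pressure described above.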

Where standards were not present, the UIDAI was willing to create new standards in order to increase competition. At the outset of UIDAI's work, every biometric device had its own interface, distinct from the interfaces of other biometric devices. If a capture application wanted to support 10 commonly used devices, then the application developer would have to implement 10 different interfaces. This would have made it costly to bring new devices into the project, even if these new devices were cheaper and better. In order to avoid this situation, the UIDAI created an intermediate specification. Vendors could implement support for this specification, and their devices could be certified. This allowed all capture applications to work with all certified devices.
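In programming terms, the intermediate specification acts like an abstract interface that each vendor implements. Here is a minimal sketch, in Python for illustration; the class and method names are invented, and the real specification would also cover device discovery, capture formats, and certification.

```python
from abc import ABC, abstractmethod

class CaptureDevice(ABC):
    """Stand-in for the intermediate device specification: capture
    applications code against this one interface."""

    @abstractmethod
    def capture_fingerprint(self) -> bytes: ...

    @abstractmethod
    def capture_iris(self) -> bytes: ...

class VendorXScanner(CaptureDevice):
    """Hypothetical vendor adapter wrapping that vendor's own SDK."""

    def capture_fingerprint(self) -> bytes:
        return b"fingerprint-image"  # placeholder for the real SDK call

    def capture_iris(self) -> bytes:
        return b"iris-image"

def enrol(device: CaptureDevice) -> bytes:
    # The application never sees vendor-specific interfaces, so a new,
    # cheaper certified device can be swapped in without code changes.
    return device.capture_fingerprint()
```

This is why certification against one specification was enough to let every capture application work with every certified device.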

Lessons


The success of the UIDAI offers many lessons for other government projects. Perhaps the first is that innovation is indeed possible within the government: government processes need not stand in the way of innovative decisions. In fact, processes commonly used within the government, such as expert committees and consensus-based decision-making, can provide methods to examine difficult issues in a credible manner. High-quality procurement and project management skills can help the government outsource many functions that are currently housed within it.

The paper also suggests that scale and complexity need not be deterrents to private sector participation: in fact, the large scale of government projects can make the project more attractive to private parties. Another lesson government agencies could learn from the UIDAI is the need to test major hypotheses through field trials before launching projects at scale. Conducting such field trials provides an opportunity to change the design or the implementation roadmap well in time, thus saving precious public money from being wasted.

Conclusion


The UIDAI could achieve its objective because it adopted a different approach from most government organisations. It took tough decisions, such as the one to use iris images; it expended resources on building pertinent knowledge, by constantly experimenting on the ground and learning from these trials; and it exploited private-sector competition to achieve its task at the lowest cost. It should be noted that this is not an exhaustive list of its innovations, but without these three decisions, it is unlikely the UIDAI would have been able to fulfil its mission.

Even large government projects can be done fast and efficiently. Government processes need not be obstructive. In fact, the mechanisms of bureaucracy, such as committees, adherence to financial regulations, and desire for consensus, can help to resolve difficult issues and take tough decisions. Well-designed pilots and field-tests can help the government evaluate the effectiveness of large programs, so that it can deploy public resources more usefully. High quality procurement and contract-management processes can enable the government to leverage the dynamism of the private sector to provide public goods effectively.

Acknowledgements


I am grateful to Prasanth Regy and Ajay Shah, both of NIPFP, for stimulating discussions.




The author is Chairman, Telecom Regulatory Authority of India (TRAI) and was part of the founding team at UIDAI.

Wednesday, September 07, 2016

Dating the Indian business cycle

by Radhika Pandey, Ila Patnaik, Ajay Shah.

Most macroeconomics is about business cycle fluctuations. The ultimate dream of macroeconomic policy is to use monetary policy and fiscal policy to reduce the amplitude of business cycle fluctuations, without contaminating the process of trend GDP growth. From an Indian policy perspective, this agenda is sketched in Shah and Patnaik (2010). The starting point of all these glamorous things, however, is measurement. The major barrier to doing Indian macroeconomics is the lack of foundations for business cycle measurement.

The first milestone in this journey is sound procedures for seasonal adjustment of a large number of macroeconomic time series. At NIPFP, we have built this knowledge over the last decade, and insights from this work are presented in Bhattacharya et al., 2016.

The next milestone is dates of turning points of the business cycle. As an example, in the US, the NBER produces a set of dates. These dates are extremely valuable in myriad applications. For instance, the standard operating procedure when drawing the chart of a macroeconomic time-series is to show a shaded background for periods of contraction. Here is one example: y-o-y CPI inflation in the US, with recessions shown as shaded bars. In the Indian setting, several papers have worked on the problem of identifying dates of turning points of the business cycle (Dua and Banerji, 2000; Chitre, 2001; Patnaik and Sharma, 2002; Mohanty et al., 2003).

In a new paper (Pandey et al., 2016) we bring three new perspectives to this question:

  1. In the older period, India was an agricultural economy, and the ups and downs of GDP growth were largely monsoon shocks. It is only in the recent period that we have got structural transformation, and the market process of cyclical behaviour of corporate investment and inventory, which add up to a business cycle phenomenon that is recognisably related to the mainstream conception of business cycles (Shah, 2008). This motivates a focus on the post-1991 period.
  2. We are able to shift from annual data to quarterly data by starting in the mid-1990s.
  3. We have laid the groundwork for this to be a system, with regular updating of the dates, rather than a one-off paper.

Methods


One approach to business cycle measurement focuses on ``growth cycles'', and relies on detrending procedures to extract the cyclical component of output. The cycle is defined to be in the boom phase when actual output is above the estimated trend, and in recession when actual output is below it. This identifies expansions and contractions based on the level of output. In contrast, the ``growth rate cycle'' identifies turning points based on the growth rate of output. For the post-reform period in India, where trend growth has itself been shifting and estimated trends are correspondingly unreliable, the growth rate cycle is more appropriate.

At an intuitive level, the procedure works as follows. First, we remove the trend and focus on fluctuations away from the trend. Second, we remove the high frequency fluctuations (below two years) and the low frequency fluctuations (above eight years). What's left is in the range of frequencies which are considered `the business cycle'. Third, we identify turning points in this series.

In terms of tools and techniques, we use the Christiano-Fitzgerald filter, which belongs to the category of band-pass filters. It is used to extract the NBER-suggested frequencies of two to eight years. To this filtered cyclical component, we apply the dating algorithm developed by Bry and Boschan (1971).
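As a toy illustration of the dating step, here is a minimal local-extremum detector in the spirit of Bry and Boschan: a peak (trough) is a local maximum (minimum) within a window, and successive turns must alternate and be a minimum number of observations apart. This is a simplified sketch, not the full Bry-Boschan procedure (which also treats outliers and censors turns near the sample ends), and it is run here on a synthetic sine wave rather than the filtered GDP cycle.

```python
import math

def turning_points(cycle, window=2, min_phase=2):
    """Return (index, 'peak'/'trough') pairs for a cyclical series."""
    turns = []
    for t in range(window, len(cycle) - window):
        neighbours = cycle[t - window:t] + cycle[t + 1:t + window + 1]
        if all(cycle[t] > c for c in neighbours):
            kind = "peak"
        elif all(cycle[t] < c for c in neighbours):
            kind = "trough"
        else:
            continue
        # Enforce alternation of peaks and troughs, and a minimum
        # phase length between successive turns.
        if turns and (turns[-1][1] == kind or t - turns[-1][0] < min_phase):
            continue
        turns.append((t, kind))
    return turns

# Synthetic cycle with a period of 16 quarters (4 years), inside the
# 2-8 year business cycle band.
cycle = [math.sin(2 * math.pi * t / 16) for t in range(64)]
turns = turning_points(cycle)
# First peak at t=4, first trough at t=12, and so on, alternating.
```

On real data, the input to this step would be the cyclical component extracted by the Christiano-Fitzgerald filter, not the raw series.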

Our analysis is based on the seasonally adjusted quarterly GDP series (base year 2004-05). This series is available from 1996 Q2 (Apr-Jun) to 2014 Q3 (Jul-Sep). The CSO has revised the GDP series to a new base year of 2011-12, but the revised series is available only from 2011 Q2. Hence we stick to the series with the old base year for our analysis.

Results


De-trended, filtered, seasonally adjusted real GDP growth

As an example, look at the period of the Lehman crisis. It is well known that the economy was weakening well before the Lehman bankruptcy in September 2008. As an example, INR started depreciating sharply from January 2008 onwards. The evidence above shows that the economy peaked in Q2 2007, and started weakening thereafter.

Each turning point is a fascinating moment. In Q2 2007, i.e. Apr-May-Jun 2007, growth was good but the business cycle was about to turn. It is interesting to go back into history to each of these turning points and think about what was going on then, and what we were thinking then.

Dates of turning points in GDP: 1996-2014

Phase      Start   End     Duration (quarters)  Amplitude (per cent)
Recession  1999Q4  2003Q1  13                   3.3
Expansion  2003Q1  2007Q2  17                   2.5
Recession  2007Q2  2009Q3  9                    2.3
Expansion  2009Q3  2011Q2  7                    1.3
Recession  2011Q2  2012Q4  6                    0.9

Our findings on business cycle chronology are robust to the choice of filter and to the choice of business cycle indicator. We repeat the analysis using different indicators, such as IIP, GDP excluding agriculture and government, and firms' net sales, and find broadly similar turning points. Details of these explorations are in the paper.

A system, not just a paper


This is not a one-off paper. We will review these dates regularly and update the files, while avoiding changes in URLs. If the methods run into trouble with future data, we will address those problems. This work would thus become part of the public goods of the Indian statistical system.

All key materials have been released into the public domain. In addition to a paper web page, we have a system web page which serves a .csv file of the dates at a fixed URL, so that it can be used, e.g., in your R programs.
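For instance, the published dates can be turned into shaded-bar spans in a few lines of code. The sketch below is in Python; the column names are assumptions for illustration (the post does not show the actual layout of the .csv file), and the rows are copied from the table of turning points above.

```python
import csv
import io

# Hypothetical layout of the dates file; actual column names may differ.
DATES_CSV = """phase,start,end
Recession,1999Q4,2003Q1
Expansion,2003Q1,2007Q2
Recession,2007Q2,2009Q3
Expansion,2009Q3,2011Q2
Recession,2011Q2,2012Q4
"""

def recession_spans(text):
    """Return (start, end) pairs for recessions, ready to be drawn as
    shaded bars behind any macroeconomic time-series chart."""
    rows = csv.DictReader(io.StringIO(text))
    return [(r["start"], r["end"]) for r in rows if r["phase"] == "Recession"]

spans = recession_spans(DATES_CSV)
```

In practice one would fetch the file from the fixed URL instead of embedding it, so that charts pick up revised dates automatically.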

An example of an application


An example of placing recession bars on a graph, of
growth in (non-finance, non-oil) firms net sales

The graph above shows the familiar series of seasonally adjusted annualised growth of the net sales of non-financial, non-oil firms, with shaded bars showing downturns. This series starts only in 2000, as quarterly disclosure by firms began then. Placing this series (net sales of firms) into the context of business cycle events gives us fresh insight into both: we learn something about how the sales of firms respond to business cycle fluctuations, and we learn something about business cycle fluctuations themselves.

Facts about the Indian business cycle


It is useful to know summary statistics about the Indian business cycle: the average duration and amplitude of expansion and recession and the coefficient of variation (CV) in duration and amplitude across expansions and recessions.

Summary statistics of GDP growth cycles

Phase      Average amplitude (per cent)  Average duration (quarters)  CV of duration (CVD)  CV of amplitude (CVA)
Expansion  2.5                           12.0                         0.34                  0.38
Recession  2.2                           9.3                          0.31                  0.45

The average amplitude of expansions is 2.5%, while that of recessions is 2.2%. The average duration of expansions is 12 quarters, while that of recessions is 9.3 quarters. These are fascinating new facts about the Indian business cycle. There is more heterogeneity in the amplitude of downturns than in that of expansions.

Changing nature of the Indian business cycle


In recent decades, a number of emerging economies have undergone structural transformation and introduced reforms aimed at greater market orientation. There is an emerging strand of literature that studies the changes in business cycle stylised facts in response to these changes, and it finds that the stylised facts have indeed changed over time (Ghate et al.; Alp et al., 2012). In the paper, we explore some of these changes.

In the post-reform period, both expansions and recessions have been diverse in duration and amplitude. Some recessions are deeper and more severe than others. Similarly, there is considerable variation in the duration of expansions and recessions across specific cycles: some are short-lived, while others persist longer.

References


Rudrani Bhattacharya, Radhika Pandey, Ila Patnaik and Ajay Shah. Seasonal adjustment of Indian macroeconomic time-series, NIPFP Working Paper 160, January 2016.

Radhika Pandey, Ila Patnaik and Ajay Shah. Dating business cycles in India. NIPFP Working Paper 175, September 2016.

Ajay Shah (2008). New issues in macroeconomic policy. In: Business Standard India. Ed. by T. N. Ninan. Business Standard Books. Chap. 2, pp. 26-54.

Ajay Shah and Ila Patnaik (2010). Stabilising the Indian business cycle. In: India on the growth turnpike: Essays in honour of Vijay L. Kelkar. Ed. by Sameer Kochhar. Academic Foundation. Chap. 6, pp. 137-154.