Deep Learning Network Portfolio: Building a Minimally Correlated Portfolio Using Network Analysis

Summary

In this article, we follow hedgecraft's approach to portfolio management, but with a key difference: we use a sub-graph centrality measure, and this is what makes our approach unique. Using insights from network science, we build a centrality-based risk model for generating portfolio asset weights. The model is trained on the daily prices of 31 stocks from 2006-2014 and validated on the years 2015, 2016, 2017, 2018, and 2019. As a benchmark, we compare the model with a portfolio constructed with Modern Portfolio Theory (MPT). Our proposed asset allocation algorithm significantly outperformed both the Sensex30 and Nifty50 indexes in every validation year, with an average annual return rate of 26.51%, a 13.54% annual volatility, a 1.59 Sharpe ratio, a -21.22% maximum drawdown, a return over maximum drawdown of 6.56, and a growth-risk ratio of 1.86. In comparison, the MPT portfolio had a 9.63% average annual return rate, an 18.07% annual standard deviation, a 0.41 Sharpe ratio, a -22.59% maximum drawdown, a return over maximum drawdown of 2.2, and a growth-risk ratio of 0.63.

Background

In this series, we play the part of an Investment Data Scientist at Bridgewater Associates performing a go/no go analysis on a new idea for risk-weighted asset allocation. Our aim is to develop a network-based model for generating asset weights such that the probability of losing money in any given year is minimized. We've heard through the grapevine that all go decisions will be presented to Dalio's inner circle at the end of the week and will likely be subject to intense scrutiny. As such, we work with a few highly correlated assets and strict go/no go criteria. We build the model using the daily prices of each stock in the Sensex. If our recommended portfolio either (1) loses money in any year, (2) does not outperform the market every year, or (3) does not outperform the MPT portfolio, the decision is no go.

Asset Diversification and Allocation

The building blocks of a portfolio are assets (resources with economic value that is expected to increase over time). Each asset belongs to one of seven primary asset classes: cash, equity, fixed income, commodities, real estate, alternative assets, and more recently, digital (such as cryptocurrency and blockchain). Within each class are different asset types: for example, stocks, index funds, and equity mutual funds all belong to the equity class, while gold, oil, and corn belong to the commodities class. An emerging consensus in the financial sector is this: a portfolio containing assets of many classes and types hedges against potential losses by increasing the number of revenue streams. In general, the more diverse the portfolio, the less likely it is to lose money. Take stocks, for example: a diversified stock portfolio contains positions in multiple sectors. We call this asset diversification, or more simply, diversification. Below is a table summarizing the asset classes and some of their respective types.

An investor solves the following asset allocation problem: given X rupees and N assets, find the best possible way of breaking X into N pieces. By "best possible" we mean maximizing our returns subject to minimizing the risk to our initial investment. In other words, we aim to consistently grow X irrespective of the overall state of the market. In what follows, we explore provocative insights by Ray Dalio and others on portfolio construction.

The above chart depicts the behavior of a portfolio with increasing diversification. Along the x-axis is the number of asset types; along the y-axis is how "spread out" the annual returns are. A lower annual standard deviation indicates smaller fluctuations in each revenue stream, and in turn a diminished risk exposure. The "Holy Grail", so to speak, is to (1) find the largest number of assets that are the least correlated and (2) allocate X rupees to those assets such that the probability of losing money in any given year is minimized. The underlying principle is this: the portfolio most robust against large market fluctuations and economic downturns is one whose assets are the most independent of each other.

Visualizing How A Portfolio is Correlated with Itself (with Physics)

The following visualizations are rendered with the Kamada-Kawai method, which treats each vertex of the graph as a mass and each edge as a spring. The graph is drawn by finding the list of vertex positions that minimize the total energy of the ball-spring system. The method treats the spring lengths as the weights of the graph, given by 1 – cor_matrix, where cor_matrix is the distance correlation matrix. Nodes separated by large distances reflect smaller correlations between their time-series data, while nodes separated by small distances reflect larger correlations. In the minimum-energy configuration, vertices with few connections experience a net repulsive force while vertices with many connections feel a net attractive force. As such, nodes with a larger degree (more correlations) fall toward the center of the visualization, while nodes with a smaller degree (fewer correlations) are pushed outwards. For an overview of physics-based graph visualizations, see the force-directed graph drawing wiki.
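The layout just described can be sketched with networkx; the 4-asset cor_matrix below is purely illustrative, not the article's data:

```python
import numpy as np
import networkx as nx

# Hypothetical distance-correlation matrix for four assets
# (illustrative values only; the article uses 31 stocks).
cor_matrix = np.array([
    [1.00, 0.80, 0.30, 0.25],
    [0.80, 1.00, 0.35, 0.40],
    [0.30, 0.35, 1.00, 0.85],
    [0.25, 0.40, 0.85, 1.00],
])
spring_lengths = 1 - cor_matrix  # highly correlated assets get short springs

# Weighted complete graph whose edge weights are the spring lengths.
G = nx.from_numpy_array(spring_lengths)

# Kamada-Kawai minimizes the energy of the ball-and-spring system and
# returns a {node: (x, y)} dict of positions.
pos = nx.kamada_kawai_layout(G, weight="weight")
```

The resulting positions can then be passed to nx.draw to reproduce the ball-and-spring picture.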

In the above visualization, the sizes of the vertices are proportional to the number of connections they have. The color bar to the right indicates the degree of dissimilarity (the distance) between the stocks: the larger the value (the lighter the color), the less similar the stocks are. Keeping this in mind, several stocks jump out. Bajaj Finance, ITC, HUL, and HeroMotoCorp all lie on the periphery of the network with the fewest correlations above Pc = 0.325. On the other hand, ICICI Bank, Axis Bank, SBI, and Yes Bank sit in the core of the network with the greatest number of connections above Pc = 0.325. It is clear from the closing-prices network that our asset allocation algorithm needs to reward vertices on the periphery and punish those nearing the center. In the next code block we build a function to visualize how the edges of the distance correlation network are distributed.
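The thresholded network itself can be sketched as follows; the stock names and distance-correlation values below are hypothetical, chosen only to mimic the core-periphery pattern described above:

```python
import numpy as np
import networkx as nx

Pc = 0.325  # edge threshold used in the article

names = ["BajajFin", "ICICI", "Axis", "SBI", "ITC"]  # illustrative subset
# Hypothetical distance-correlation matrix for the five stocks.
cor = np.array([
    [1.00, 0.20, 0.25, 0.22, 0.30],
    [0.20, 1.00, 0.70, 0.65, 0.35],
    [0.25, 0.70, 1.00, 0.60, 0.33],
    [0.22, 0.65, 0.60, 1.00, 0.31],
    [0.30, 0.35, 0.33, 0.31, 1.00],
])

# Keep an edge only when the pair is correlated above the threshold.
G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if cor[i, j] > Pc:
            G.add_edge(names[i], names[j], weight=1 - cor[i, j])
```

In this toy graph Bajaj Finance ends up isolated on the periphery while the banks form the correlated core, mirroring the pattern in the article's figure.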

Observations

  • The degree distribution is left-skewed.
  • The average node is connected to 86.6% of the network.
  • Very few nodes are connected to less than 66.6% of the network.
  • The kernel density estimation is not a good fit.
  • By eyeballing the plot, the degrees appear to follow an inverse power-law distribution. (This would be consistent with the findings of Tse, et al. (2010)).
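The observations above can be checked numerically. Here is a sketch on a toy core-periphery graph standing in for the 31-stock network (the real percentages come from the article's data, not this toy):

```python
import networkx as nx

# Toy stand-in for the 31-stock correlation network: a dense core of 25
# highly inter-correlated stocks plus 6 peripheral stocks, each attached
# to a single core stock.
G = nx.complete_graph(25)
for k in range(6):
    G.add_edge(25 + k, k)  # pendant peripheral nodes

n = G.number_of_nodes()  # 31
# Fraction of the rest of the network each node is connected to.
frac = {v: G.degree(v) / (n - 1) for v in G}
avg_frac = sum(frac.values()) / n
```

Because most nodes sit in the dense core, the median connected fraction exceeds the mean, which is exactly the left-skew pattern noted above.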

Intraportfolio Risk

We read an intraportfolio risk plot like this: ICICI Bank is 0.091/0.084 ≈ 1.08 times riskier than Maruti Suzuki (MSPL). Intuitively, the assets that cluster in the center of the network are the most susceptible to impacts, whereas those farther from the cluster are the least susceptible. The logic from here is straightforward: take the inverse of the relative risk (which we call the "relative certainty") and normalize it so that it adds to 1. These are the asset weights. Formally,
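The original article displays an equation at this point. A reconstruction consistent with the sentence above, writing $r_i$ for the relative risk (subgraph centrality) of asset $i$ and $w_i$ for its weight:

```latex
w_i = \frac{1/r_i}{\sum_{j=1}^{N} 1/r_j}, \qquad \sum_{i=1}^{N} w_i = 1
```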

Next, let's visualize the allocation of 100,000 INR in our portfolio.
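A minimal sketch of the weighting-and-allocation step, assuming the recipe above (inverse subgraph centrality, normalized) on a hypothetical toy network:

```python
import networkx as nx

# Toy correlation network (hypothetical edges above the threshold):
# three banks form a correlated core, ITC hangs off the core, and
# Bajaj Finance sits alone on the periphery.
G = nx.Graph([("ICICI", "Axis"), ("ICICI", "SBI"), ("Axis", "SBI"),
              ("SBI", "ITC")])
G.add_node("BajajFin")

# Subgraph centrality as relative risk; its inverse as "relative certainty".
risk = nx.subgraph_centrality(G)
certainty = {v: 1.0 / risk[v] for v in G}
total = sum(certainty.values())
weights = {v: c / total for v, c in certainty.items()}  # sums to 1

# Allocate 100,000 INR proportionally to the weights.
capital = 100_000
allocation = {v: capital * w for v, w in weights.items()}
```

As in the article, the peripheral asset (here Bajaj Finance) receives the largest slice of capital, while the core banks receive the least.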

Subgraph Centrality-Based Asset Allocation

Bajaj Finance receives about 12.58% of our capital, Bajaj Auto about 12.58%, HUL 8.15%, Infosys 4.52%, and the remaining assets receive less than 0.5% each. To the traditional investor, this strategy may appear "risky" since the bulk of our investment sits with a handful of our 31 assets. While it's true that if Bajaj Finance is hit hard we'll lose a substantial amount of money, our algorithm predicts Bajaj Finance is the least likely to take a hit if and when our other assets get into trouble. Bajaj Finance is clearly the winning pick in our portfolio.

It's worth pointing out that the methods we've used to generate the asset allocation weights differ dramatically from the contemporary methods of MPT and its extensions. The approach taken in this project makes no assumptions about future outcomes of a portfolio, i.e., the algorithm doesn't require us to predict expected returns (as MPT does). What's more, we're not solving an optimization problem; there's nothing to be minimized or maximized. Instead, we observe the topology (interrelatedness) of our portfolio, use the subgraph centrality to predict which assets are the most susceptible to volatile behavior, and allocate capital accordingly.

Alternative Allocation Strategy: Allocate Capital in the Maximum Independent Set

The maximum independent set (MIS) is the largest set of vertices such that no two are adjacent. Applied to our asset correlation network, the MIS is the greatest number of assets such that every pair has a correlation below Pc = 0.325. The size of the MIS grows with the threshold Pc: larger values of Pc remove more edges, producing a sparser network in which the MIS tends to be larger. An optimized portfolio would therefore correspond to maximizing the size of the MIS subject to minimizing Pc. The best way to do this is to increase the universe of assets we're willing to invest in. By further diversifying the portfolio across many asset types and classes, we can isolate the largest number of minimally correlated assets and allocate capital inversely proportional to their relative risk. While generating the asset weights remains a non-optimization problem, generating the asset correlation network becomes one. We're really solving two separate problems: determining how to build the asset correlation network (there are many ways) and determining which graph invariants (there are many) extract the asset weights from the network. As such, one can easily imagine a vast landscape of portfolios beyond that of MPT, and a staggering amount of wealth to create. Unfortunately, solving the MIS problem is NP-hard; the best we can do is find an approximation.

Using Expert Knowledge to Approximate the Maximum Independent Set

We have two options: randomly generate a list of maximal independent sets (subgraphs in which no two vertices share an edge) and select the largest one, or use expert knowledge to reduce the number of sets to generate and then select the largest. Both methods are imperfect, but the former is far more computationally expensive than the latter. Suppose we do fundamentals research and conclude that Bajaj Finance and HUL must be in our portfolio. How could we imbue the algorithm with this knowledge? Can we make the algorithm flexible enough for portfolio managers to fine-tune with good ol' fashioned research, while keeping it rigid enough to prevent poor decisions from producing terrible portfolios? We confront this problem in the code block below by extracting an approximate MIS from 100 randomly generated maximal independent sets containing Bajaj Finance and HUL.

The generate_mis function generates a maximal independent set that approximates the true maximum independent set. As an option, the user can pick a list of assets they want in their portfolio, and generate_mis will return the safest assets to complement the user's choice. Picking Bajaj Finance and HUL left us with Sun Pharma and Hero MotoCorp, among others. The weights of these assets remain inversely proportional to the subgraph centrality.
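The article's generate_mis implementation isn't reproduced here; below is a minimal sketch of how such a function might work, using networkx's maximal_independent_set with its nodes argument to force the user's picks into every sampled set (toy network, illustrative names):

```python
import networkx as nx

def generate_mis(G, required=None, trials=100, seed=0):
    """Approximate the maximum independent set by sampling `trials`
    random maximal independent sets containing the `required` assets,
    keeping the largest one found."""
    best = set()
    for t in range(trials):
        candidate = nx.maximal_independent_set(G, nodes=required, seed=seed + t)
        if len(candidate) > len(best):
            best = set(candidate)
    return best

# Toy network: three correlated banks form a triangle; three other
# stocks have no correlations above the threshold.
G = nx.Graph([("ICICI", "Axis"), ("Axis", "SBI"), ("ICICI", "SBI")])
G.add_nodes_from(["BajajFin", "HUL", "SunPharma"])

mis = generate_mis(G, required=["BajajFin", "HUL"])
```

Every maximal independent set here must contain all three peripheral stocks plus exactly one of the triangle of banks, so the approximation always finds a set of size four.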

Allocating Shares to the Deep Learning Network Portfolio

In this section we write (almost) production-ready code for portfolio analysis and include our own risk-adjusted returns score. The section looks something like this:

We obtain the cumulative returns and returns on investment, extract the end of year returns and annual return rates, calculate the average annual rate of returns and annualized portfolio standard deviation, compute the Sharpe Ratio, Maximum Drawdown, Returns over Maximum Drawdown, and our own unique measure: the Growth-Risk Ratio.
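The standard metrics in that list can be sketched from daily simple returns as follows. The growth-risk ratio is the authors' own measure and isn't defined in the text, so it is omitted; performance_metrics is a hypothetical helper, not the article's code:

```python
import numpy as np

def performance_metrics(daily_returns, rf=0.0, periods_per_year=252):
    """Annualized return, volatility, Sharpe ratio, maximum drawdown,
    and return over maximum drawdown (RoMaD) from daily simple returns."""
    r = np.asarray(daily_returns, dtype=float)
    cum = np.cumprod(1 + r)                          # growth of 1 unit invested
    ann_return = cum[-1] ** (periods_per_year / len(r)) - 1
    ann_vol = r.std(ddof=1) * np.sqrt(periods_per_year)
    sharpe = (ann_return - rf) / ann_vol
    running_peak = np.maximum.accumulate(cum)
    drawdown = cum / running_peak - 1                # <= 0 at every step
    max_dd = drawdown.min()
    romad = ann_return / abs(max_dd)
    return {"ann_return": ann_return, "ann_vol": ann_vol,
            "sharpe": sharpe, "max_dd": max_dd, "romad": romad}
```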

Finally, we visualize the returns, drawdowns, and returns distribution of each model and analyze the performance of each portfolio.

Visualizing the Returns

Pictured above are the daily returns for Deep Learning Network MIS (solid green curve), Deep Learning Network (solid blue curve), and the Efficient Frontier portfolio (solid red curve) from 2015 to 2019. The color-coded dashed curves represent the 50-day rolling averages of the respective portfolios. Several observations pop out: (1) Deep Learning Network MIS significantly outperformed Deep Learning Network Portfolio, (2) Deep Learning Network Portfolio substantially outperformed the Efficient Frontier, (3) Deep Learning Network MIS grew 158.4%, never falling below 0% returns on any trading day, (4) Deep Learning Network grew 139.3%, and (5) the Efficient Frontier grew 49.8%. Next, let's observe the annual returns for each portfolio and compare them with the market.

In comparison, the Nifty50 had annual return rates of -4.1%, 3%, 28.6%, 3.2%, and -1.03% (YTD) in 2015, 2016, 2017, 2018, and 2019 respectively. Deep Learning Network Portfolio and Deep Learning Network MIS substantially outperformed both the market and the Efficient Frontier, and both grew at an impressive rate. Deep Learning Network MIS grew 19.1% larger than Deep Learning Network Portfolio and 108.6% larger than the Efficient Frontier, while Deep Learning Network Portfolio grew 89.5% larger than the Efficient Frontier. What's more, Deep Learning Network MIS's return rates consistently increased about 25% every year, whereas the return rates of Deep Learning Network Portfolio and the Efficient Frontier were less consistent; Deep Learning Network MIS clearly has the most consistent rate of growth. We'd expect this rapid growth to be accompanied by a large burden of risk, manifested either as a large degree of volatility, as steep and frequent maximum drawdowns, or both. As we explore below, the Deep Learning Network portfolios sustained their growth rates with significantly less risk exposure than the Efficient Frontier.

Visualizing Drawdowns

Illustrated above is the daily rolling 252-day drawdown for Deep Learning Network MIS (filled sea-green curve), Deep Learning Network (filled royal-blue curve), and the Efficient Frontier (filled dark-salmon curve), along with the respective rolling maximum drawdowns (solid curves). Several observations stick out: (1) the Deep Learning Network portfolios have significantly smaller drawdowns than the portfolio generated from the Efficient Frontier, (2) both Deep Learning Network portfolios have roughly the same maximum drawdown (about -22%), (3) Deep Learning Network on average lost the least amount of returns, and (4) Deep Learning Network's rolling maximum drawdowns are, on average, less pronounced than Deep Learning Network MIS's. These results suggest the subgraph centrality has predictive power as a measure of relative (intraportfolio) risk and, more generally, that network-based portfolio construction is a promising alternative to more traditional approaches like MPT.
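The rolling-drawdown curves described above can be computed with pandas; the return series below is simulated, purely for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical daily portfolio values (a random walk with drift stands in
# for the cumulative returns of one of the portfolios).
rng = np.random.default_rng(42)
daily_returns = rng.normal(0.0005, 0.01, 1000)
value = pd.Series(np.cumprod(1 + daily_returns))

# Drawdown relative to the running 252-day peak, as in the figure.
rolling_peak = value.rolling(window=252, min_periods=1).max()
drawdown = value / rolling_peak - 1          # 0 at a fresh 252-day high
rolling_max_dd = drawdown.rolling(window=252, min_periods=1).min()
```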

Deep Learning Network and its MIS variant dramatically outperformed the Efficient Frontier on every metric (save Deep Learning Network MIS's annual volatility). These results lend credence to the possibility that we are onto something substantial here, as we have passed our go/no go criteria. Outperforming MPT by these margins is no simple feat, but the real test is whether Deep Learning Network Portfolio can consistently beat MPT on many randomly generated portfolios. To wrap up this notebook, let's take a look at how the returns for each portfolio are distributed and move to the conclusion of Deep Learning Network Portfolio Optimization.

Visualizing the Distribution of Returns

Above are the returns distributions for each portfolio: Efficient Frontier (in red), Deep Learning Network (in blue), and Deep Learning Network MIS (in green). The Efficient Frontier algorithm produced a portfolio with a roughly normal distribution of returns; the same can't be said of the Deep Learning Network portfolios, which are heavily right-skewed. The right-skewness of the Deep Learning Network portfolios is caused by their strong upward momentum, that is, their consistent growth. In general, we'd expect a strong correlation between the right-skewness of the returns distribution and the growth-risk ratio.

It’s important to emphasize that deviation-based measures of risk-adjusted performance implicitly assume the distribution of returns follows a normal distribution. As such, the Sharpe ratio isn’t a suitable measure of performance since the standard deviation isn’t a suitable measure of risk for the Deep Learning Network portfolios.
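One way to make this concern concrete is to measure the skewness of the returns and test normality directly; the two return streams below are simulated stand-ins for a roughly normal (Efficient-Frontier-like) distribution and a heavily right-skewed one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins: roughly normal vs. heavily right-skewed returns.
normal_like = rng.normal(0.0004, 0.01, 2000)
right_skewed = rng.exponential(0.01, 2000)

skew_normal = stats.skew(normal_like)
skew_right = stats.skew(right_skewed)

# Jarque-Bera tests the normality assumption behind deviation-based risk
# measures; a tiny p-value means the Sharpe ratio's premise fails.
jb_stat, jb_p = stats.jarque_bera(right_skewed)
```

A rejection here is exactly the situation described above: when returns are strongly right-skewed, the standard deviation (and hence the Sharpe ratio) mischaracterizes the risk.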

While Deep Learning Network had less pronounced maximum drawdowns, it was more frequently below 0% returns (1.59% of the time) than its MIS variant (0.53% of the time). Both values dwarf that of the Efficient Frontier, which painfully experienced negative returns a third of the time. It's interesting to note that the maximum losses of the Deep Learning Network portfolios are an order of magnitude smaller than their maximum drawdowns, in contrast to the Efficient Frontier, whose maximum loss is on the same order of magnitude as its maximum drawdown. It's also worth pointing out that Deep Learning Network has a lower probability of falling below its 30-, 50-, and 90-day rolling averages than its MIS variant. Taken together, Deep Learning Network's smaller average rolling maximum drawdown and smaller probabilities of falling below these rolling averages indicate its growth is steadier than its MIS variant's, even though the MIS variant has the more consistent growth rate. Stated another way: Deep Learning Network's "velocity of returns" is more consistent, whereas Deep Learning Network MIS's "acceleration of returns" is more consistent.

Future Portfolio Allocation & Conclusion

A similar analysis to the one above was repeated to generate the optimal portfolio and the subsequent allocation. The results were as follows:

HUL: 11.73%, ITC: 11.73%, Bajaj Auto: 11.73%, Sun Pharma: 11.73%, ONGC: 11.73%, Asian Paints: 11.73%, NTPC: 7.6%, PowerGrid: 7.6%, Tech Mahindra: 3.88%, Infosys: 3.88%, TCS: 2.76%, HCL Tech: 2.76%, HeroMotoCorp: 0.87%

Thus, the sector allocation proposed by our Deep Learning Network algorithm is as follows: FMCG: 23.46%, Automobile: 12.6%, Pharma: 11.73%, IT: 13.28%, Energy: 26.93%, Paints & Varnishes: 11.73%

Conclusion

In this article, we built a novel algorithm for generating the asset weights of a minimally correlated portfolio with tools from network science. Our approach is twofold: we first construct an asset correlation network with energy statistics (i.e., the distance correlation) and then extract the asset weights with a suitable centrality measure. As an intermediate step, we interpret the centrality score (in our case, the subgraph centrality) as a measure of relative risk, as it quantifies the influence of each asset in the network. Recognizing the need for a human-in-the-loop variation of our proposed method, we modified the asset allocation algorithm to allow a user to pick assets subject to the constraints of the maximal independent set.

Both Deep Learning Network algorithms (Deep Learning Network and Deep Learning Network MIS), along with the benchmark Efficient Frontier, were trained on a dataset of thirty-one daily historical stock prices from 2006-2014 and tested on 2015-2019. The portfolios were evaluated by cumulative returns, return rates, volatility, maximum drawdowns, risk-adjusted return metrics, and downside risk-adjusted performance metrics. On all performance metrics, the Deep Learning Network algorithm significantly outperformed both the portfolio generated by the Efficient Frontier and the market, passing our go/no go criteria.

Author
Harsh Shivlani
Team Leader– Fixed Income & Derivatives
(M.Sc. Finance, NMIMS – Mumbai. Batch 2018-20)

Connect with Harsh on LinkedIn
Author
Neil Jha
Team Leader – Fintech
(M.Sc. Finance, NMIMS – Mumbai. Batch 2018-20)

Connect with Neil on LinkedIn

THE PSB MEGA MERGER: AN OVERVIEW

On the 30th of August, 2019, Finance Minister (FM) Nirmala Sitharaman announced the merger of 10 major public sector banks (PSBs), reducing the number of players in the banking sector from a whopping 27 to 12. This news came in the wake of the disappointing announcement that India recorded just 5% GDP growth in the preceding quarter. It is expected that the merger will increase the CASA (Current Account and Savings Account) ratio and enhance lending capacity. These reforms were deemed necessary to foster the idea of India becoming a $5 trillion economy. The table below illustrates the expected scenario if the mergers prove successful:

Merger | Rank (based on size) | Number of Branches | Total Business Size (Rs in lakh crore)
Merger I: Punjab National Bank (A), Oriental Bank of Commerce, United Bank | 2nd | 11,437 | 17.95 (1.5 times current)
Merger II: Canara Bank (A), Syndicate Bank | 4th | 10,342 | 15.2 (1.5 times current)
Merger III: Union Bank of India (A), Andhra Bank, Corporation Bank | 5th | 9,609 | 14.59 (2 times current)
Merger IV: Indian Bank (A), Allahabad Bank | 7th | 6,104 | 8.08 (2 times current)

(A) Anchor Bank

It was also announced that Rs 55,250 crore of capital will be infused to support credit growth and regulatory compliance. The table below shows the capital infusion expected to aid the mega mergers:

Bank | Recapitalization (Rs in crore)
Punjab National Bank | 16,000
Union Bank | 11,700
Bank of Baroda | 7,000
Canara Bank | 6,500
Indian Bank | 2,500
Indian Overseas Bank | 3,800
Central Bank | 3,300
UCO Bank | 2,100
United Bank of India | 1,600
Punjab and Sind Bank | 750

The FM also announced multifarious administrative reforms to increase accountability and remove political intermediation. Bank management is made accountable, as the board will now be responsible for evaluating the performance of the General Manager and Managing Director. Directors must now be trained for their roles, improving leadership in the PSBs. The role of the Non-Official Director is made synonymous with that of an independent director. In order to attract talent, banks will have to pay competitive remuneration to Chief Risk Officers.

The banks were merged on three criteria: the CRAR should be greater than 10.875%, the CET-1 ratio should be above 7% (which is above the Basel norms), and net NPAs should be less than 6%. However, Canara Bank and Syndicate Bank have not been able to meet these criteria.

Post consolidation facts and figures:

  • Total Business Share
  • Ratios (all amounts in %)

MERGER – I

Ratio | PNB | OBC | United Bank of India | Post-Merger
CASA Ratio | 42.16 | 29.4 | 51.45 | 40.52
PCR | 61.72 | 56.53 | 51.17 | 59.59
CET-I | 6.21 | 9.86 | 10.14 | 7.46
CRAR Ratio | 9.73 | 12.73 | 13 | 10.77
Net NPA Ratio | 6.55 | 5.93 | 8.67 | 6.61

MERGER – II

Ratio | Canara Bank | Syndicate Bank | Post-Merger
CASA Ratio | 29.18 | 32.58 | 30.21
PCR | 41.48 | 48.83 | 44.32
CET-I | 8.31 | 9.31 | 8.62
CRAR Ratio | 11.90 | 14.23 | 12.63
Net NPA Ratio | 5.37 | 6.16 | 5.62

MERGER – III

Ratio | Union Bank | Andhra Bank | Corporation Bank | Post-Merger
CASA Ratio | 36.10 | 31.39 | 31.59 | 33.82
PCR | 58.27 | 68.62 | 66.60 | 63.07
CET-I | 8.02 | 8.43 | 10.39 | 8.63
CRAR Ratio | 11.78 | 13.69 | 12.30 | 12.39
Net NPA Ratio | 6.85 | 5.73 | 5.71 | 6.30

MERGER – IV

Ratio | Indian Bank | Allahabad Bank | Post-Merger
CASA Ratio | 34.75 | 49.49 | 41.65
PCR | 49.13 | 74.15 | 66.21
CET-I | 10.96 | 9.65 | 10.63
CRAR Ratio | 13.21 | 12.51 | 12.89
Net NPA Ratio | 3.75 | 5.22 | 4.39

Advantages:

  • Economies of scale.
  • Efficiency in operation.
  • Better NPA management.
  • High lending capacity of the newly formed entities.
  • Strong national presence and global reach.
  • Risk can be spread out and thus minimized.
  • Lower operational cost leading to lower cost of borrowing.
  • Increased customer base, organic growth of market share and business quantum.
  • Banking practices reform announced to boost accountability and professionalism.
  • Appointment of CRO (Chief Risk Officer) to enhance management effectiveness.
  • Centralized functioning promoting a central database of customers.

Disadvantages:

  • The slowdown witnessed by the economy, coupled with dangerously low demand in the automobile sector, will maintain the existing pessimism.
  • The already existing exposure of NBFCs in the individual constituent banks will be magnified as the merged entities shall have more than 10% loan exposure to NBFCs and thus, in effect, the liquidity pressure that comes along with it.
  • As history dictates, the merger of these eminent banks will cause near-term problems with respect to restructuring, recapitalization, operation, flexibility and costs.
  • Near-term growth shall be hindered and core profitability may suffer.
  • Compliance becomes a huge barrier.
  • It is difficult to merge human resources and their respective work cultures post-merger; this will in turn lead to low morale and an inefficient workforce.

Outlook:

The mergers were announced with a very noble idea in mind; however, the timing is a bit unfortunate. During these times of economic slowdown, India needs its bankers devoting their time to boosting the economy. With the merger happening, the banks will be preoccupied with the integration process rather than enhancing economic growth. Merely combining banks will not enhance credit capacity; it is also important to see whether synergies will actually be created (or exist merely on paper).

The share of assets of the top three or four banks accounts for only 30%-32%. Thus, the banks still remain fragmented for the most part, so systemic risk or a contagion effect should not be a problem as of now. Even so, not one of the four mergers can be said to be financially strong. This is a case of the blind leading the blind; two financially weak banks cannot be expected to merge into one financially strong entity. "A chain is only as strong as its weakest link."

This announcement comes at a time when even the results of the previous mergers (e.g. Bank of Baroda) have yet to bear fruit and the PSBs have only recently emerged from a long period of stress. There seems to be no common theme in the mergers (i.e. retail, corporate, or SME), no particular skill-set emphasized. Rather, it was assumed that all the banks fit the same template and a haphazard combination was made; in such a case, there is a slim chance of synergy creation. With no major theme in hand, the multifarious objectives will also distract the banks from the pressing matters at hand.

According to technical experts, it might take around three to four years to integrate the existing IT systems of the banks. Although all of them use a CBS (core banking solution), heavy customization is required, mobile apps need to be brought in sync, and backend functions have to be centralized effectively.

As for the resolution of NPAs, it might actually become easier and faster. Earlier, bankers had to talk to their counterparts and then approach senior management to come to a resolution. Now, with these institutions merging and fewer levels to report to, a solution plan can be implemented at the earliest with considerably less effort. Apart from this, now that the banks will have a common database and a larger network, they can offer more services at lower costs, which might increase fee income and, in turn, profitability. The anchor banks are expected to benefit more from the mergers, as the swap ratio will be in their favour.

Author
Chandreyee Sengupta
Team Member- Equity Research & Valuation
(MSc Finance, NMIMS Mumbai. Batch 2019-21)

Connect with Chandreyee on LinkedIn

CATASTROPHE BONDS – Fortune From The Disaster

Catastrophe bonds, known simply as "cat bonds", are financial instruments through which an issuer obtains reinsurance against a natural disaster or catastrophe. The insurance company issues the bonds as collateral against its catastrophe insurance. Cat bonds are high-yielding, with durations of two to five years, and they transfer insurance risk into the capital market.

The development of cat bonds can be traced back to the 1990s, when claims filed by clients after Hurricane Andrew could not be honored and the insurance industry suffered humongous losses. Many insurance companies that had provided catastrophe cover decided to leave the insurance sector, and about eleven insurance companies filed for bankruptcy. Hence the need arose to cover this capital with catastrophe-linked bonds.

Working of the CAT Bond:

The bond transfers risk from the insurance company to the financial markets. The amount pooled from investors is transferred to a Special Purpose Vehicle (SPV). A reinsurance agreement between the SPV and the insurance company dictates the terms and clauses for the amount to be paid in the event of a catastrophe. The SPV invests the proceeds in the capital market, mostly in low-risk money market instruments, and the returns are passed on to the cat bond investors; this is what makes cat bonds high-yield debt instruments. The SPV fulfills the claims of the risk carrier (the insurance company) if a catastrophe occurs, as per the terms of the agreement.

For instance, consider a family living in Florida, where hurricanes are most likely to occur, who approach a general insurance company for hurricane insurance. The insurance company would like to provide such insurance, since the premiums are good, but hangs back: if a hurricane occurs, it will have to pay a huge amount as indemnity. The solution is to issue cat bonds, so that the insurer won't incur huge losses. If the event is not triggered by maturity, the collateral account held by the SPV is liquidated and the proceeds are returned to the investors. But if the event triggers, the collateral is liquidated and some or all of the proceeds are passed on to the sponsor.

Figure 1: Process of CAT Bonds
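A minimal sketch of the cash-flow mechanics just described, with hypothetical numbers (cat_bond_payout is an illustrative helper, not a real pricing model):

```python
def cat_bond_payout(principal, coupon_rate, years, event_year=None, recovery=0.0):
    """Annual cash flows to the investor of a simple catastrophe bond.

    Coupons accrue each year until maturity, when the principal is
    returned; if the trigger event occurs first, the collateral goes to
    the sponsor and the investor receives only the recovery fraction."""
    flows = []
    for year in range(1, years + 1):
        if event_year is not None and year == event_year:
            flows.append(principal * recovery)  # collateral passed to sponsor
            return flows
        coupon = principal * coupon_rate
        flows.append(coupon + (principal if year == years else 0))
    return flows
```

With no trigger, a 3-year bond on 100 units at an 8% coupon pays three coupons plus principal at maturity; a trigger in year two wipes out the remaining payments.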

Investor's Perspective:

A cat bond looks like a corporate bond with a pre-determined coupon rate, but these bonds are largely unrelated to the global markets: a financial crisis has nothing to do with the triggering of a natural disaster or catastrophe. They are built on floating-rate notes, so the investor earns not only the risk premium paid by the cat bond sponsor but also the returns from the money market where the pooled amount is invested. Since these bonds are not linked to the capital markets, investors use them to diversify their portfolios and minimize market-related risk. Over the years, cat bonds have shown great growth and have proven a lucrative investment option. The performance of the Cat Bond Index, the Insurance-Linked Securities Hedge Fund (ILS-HF) Index, and equity and bond indexes is shown below; Figures 2 to 4 show why cat bonds are considered portfolio diversifiers and have been alluring over the years.

Figure 2: Performance of Cat Bond Index versus other Financial Instruments Index


 

Metric | Cat Bond Index* | ILS HF Index** | Equities*** | Bonds****
Total Return | 166.4% | 89.9% | 124.3% | 55.9%
Volatility | 3% | 3% | 15% | 5%
Annualized Return | 7.9% | 5.1% | 6.50% | 3.5%
Sharpe Ratio | 2.39 | 1.69 | 0.45 | 0.69

Figure 3: Comparing Returns and Volatility


 

Correlation | Cat Bond Index* | ILS HF Index** | Equities*** | Bonds****
Cat Bond Index* | 1 | | |
ILS HF Index** | 0.87 | 1 | |
Equities*** | 0.18 | 0.1 | 1 |
Bonds**** | 0.17 | 0.14 | 0.39 | 1

Figure 4: Correlations

Benefit for the Economy:

It is next to impossible for insurance companies to bear the shock of a catastrophe alone. The financial markets are stronger and better able to absorb the economic effect of a catastrophe. Catastrophe bonds were established after Hurricane Andrew in 1992 precisely to harness the depth of the financial markets against the effects of catastrophes.

Cat bonds are used mainly to protect against and manage the risk associated with disasters. Their use has grown rapidly over the years, including in developing economies. Countries and regions in risk-prone areas are often uninsured, or rely on government funding to rebuild the economy.

This new insurance-linked product led the World Bank to provide a framework for it, known as the “MultiCat Program”. This framework has helped Mexico and Caribbean island nations structure and issue cat bonds. The intrinsic value of these bonds is to fund recovery from the loss incurred and to transfer the risk to those willing to take it. Financial investors have turned to this asset class for its higher returns and low or no correlation with the financial markets. Today, cat bonds are also proving to be socially driven investment instruments, and a new breed of them, known as pandemic bonds, is emerging to help combat life-threatening diseases.

Indian Scenario about Cat Bonds:

When the world is booming and progressing on different financial products, India cannot step back; indeed, it is trying to stay in the race and is preparing the debut of cat bonds in the Indian economy. General Insurance Corporation of India (GIC), the country’s foremost reinsurer, began considering the issuance of cat bonds after the wake-up call of the Uttarakhand floods in 2013, when it had to pay approximately Rs 2,000 crores in claims settlement out of its own reserves. For example, had GIC issued cat bonds worth Rs 1,000 crores in 2011 with a maturity of three to five years, it would have had to shed only Rs 1,000 crores on the triggering of the event.

India being a developing economy, many parts of the country are risk-prone: the aforesaid floods, cyclones, landslides and, more rarely, earthquakes in regions such as Rajasthan. If India agrees to pay a 12% to 14% coupon on its cat bonds, they would likely attract subscriptions from pension funds, hedge funds and high-net-worth individuals, who are drawn to high interest yields over the short tenure of the bonds. The government should come out with such bonds and mitigate its own losses.

Thus, catastrophe bonds act as a savior to the economy by passing the risk on to risk-bearing financial investors.

Author
Lorretta Gonsalves
Team Member- Alternate Investments (M.Sc. Finance, NMIMS – Mumbai. Batch 2019-21)

Connect with Lorretta on LinkedIn


 

Is RLLR the long lost saviour?

Before we get into the topic, let’s understand what the repo rate is. It is the rate at which the central bank of a country (in India, the RBI) lends money to commercial banks in the event of any shortfall of funds.

People are now excited about the new repo linked lending rate (RLLR) that has come into the market, but let’s roll back a few months to December 2018. In its fifth bi-monthly monetary policy review on December 5th, the RBI made a big announcement (through N.S. Vishwanathan, Deputy Governor) that many bank customers had been waiting for: retail loans would be linked to external benchmarks instead of the various internal benchmarks produced by banks. The RBI instructed the banks to start linking lending rates to the new benchmarks from 1st April 2019.

By adopting a single benchmark, home loans, for example, which were linked to the marginal cost of funds, will now be linked to the repo rate. This binds banks to revise home loan rates instantly whenever the RBI changes the repo rate. Customers have welcomed these changes: it has long been a grievance that when the RBI reduced the repo rate, the benefit never reached consumers, whereas now they will at least pay somewhat lower interest. The benefit of a repo-rate-linked home loan scheme is that it is transparent compared with existing loans linked to the marginal-cost-of-funds based lending rate (MCLR). The interest rates on loans will move up or down in line with the repo rate announced by the RBI.

The RBI had stated that banks should benchmark their rates to either the RBI policy repo rate, the Government of India’s 91-day or 182-day Treasury bill yields as developed by Financial Benchmarks India Private Ltd (FBIL), or any other external benchmark developed by FBIL. Still, several banks opposed the decision, pointing out that their cost of funds was not linked to those external benchmarks, and delayed the implementation indefinitely.

By March 2019 the only bank to act on this directive was SBI, the largest public sector bank, and even it took some time, making the scheme effective from July 2019. Following in the same footsteps, Bank of Baroda introduced an RLLR home loan scheme from 12th August 2019, and Syndicate Bank, Allahabad Bank, Canara Bank, Union Bank of India and other banks are expected to announce their RLLR plans soon.

To be eligible for the SBI repo rate linked home loan scheme, the borrower should have a minimum annual income of Rs 6 lakhs; the tenure of the loan is up to 33 years. In the case of under-construction projects, a moratorium period of up to two years is offered over and above the maximum loan tenor of 33 years, so in such cases the total loan tenure cannot exceed 35 years.

In this home loan scheme, the borrower needs to repay a minimum of 3 per cent of the principal loan amount every year in equated monthly instalments. If you take a home loan of Rs 50 lakhs, you need to repay a minimum of Rs 1.50 lakhs as principal plus the interest cost every year.
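The minimum repayment rule above is simple arithmetic; a small sketch (integer rupee amounts, illustrative only):

```python
# Minimum yearly principal repayment under the scheme described above:
# at least 3% of the sanctioned principal per year, paid in equated
# monthly instalments. Amounts are in rupees.

def min_annual_principal(loan_amount, rate_percent=3):
    # Integer arithmetic avoids floating-point rounding on rupee amounts
    return loan_amount * rate_percent // 100

loan = 50_00_000                       # Rs 50 lakhs
yearly = min_annual_principal(loan)
monthly = yearly / 12                  # spread across 12 instalments

print(yearly)   # 150000 -> Rs 1.50 lakhs, matching the example above
print(monthly)  # 12500.0 per month, before interest
```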

The interest rates in this scheme are not directly linked to the repo rate figure announced by the RBI: the interest on the loan is 2.25 percentage points more than the repo rate. On July 1, the repo rate was 5.75 per cent, so the repo-linked lending rate was 8 per cent. But the repo-linked lending rate may change effective September 1, as the RBI announced a repo rate cut of 35 basis points (bps) in August.

Currently, RLLR is at 8 per cent. Banks will maintain a spread over and above RLLR of 40 to 55 bps. So, the effective rate for home loans up to Rs. 75 lakhs range from 8.4 per cent to 8.55 per cent. For home loans above Rs 75 lakhs, the effective rate is 8.95 per cent to 9.10 per cent (i.e. spread of 95 to 110 bps on RLLR of 8 per cent). With effect from 10th August, the home loan rates linked to MCLR would be 8.6 per cent to 8.85 per cent at SBI, which is more than RLLR.
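The rate arithmetic above can be sketched in a few lines (all figures in per cent, taken from the text; the function names are mine):

```python
# Effective home-loan rate as described above:
# RLLR = repo rate + 2.25 percentage points, plus a bank spread over RLLR.

def rllr(repo_rate, margin=2.25):
    return repo_rate + margin

def effective_rate(repo_rate, bank_spread):
    return rllr(repo_rate) + bank_spread

repo = 5.75                        # repo rate on July 1
print(rllr(repo))                  # 8.0 -> the 8 per cent RLLR quoted above

# Loans up to Rs 75 lakhs: spread of 40 to 55 bps over RLLR
print(round(effective_rate(repo, 0.40), 2),
      round(effective_rate(repo, 0.55), 2))   # 8.4 8.55
# Loans above Rs 75 lakhs: spread of 95 to 110 bps
print(round(effective_rate(repo, 0.95), 2),
      round(effective_rate(repo, 1.10), 2))   # 8.95 9.1
```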

Similarly, Bank of Baroda’s MCLR-linked home loan rate starts at 8.45 per cent, while its repo-linked rate starts at 8.35 per cent, at present 5 bps cheaper than SBI’s repo-linked home loan scheme. A repo rate linked home loan scheme benefits borrowers with immediate savings when interest rates go down. For instance, with a further 50 bps rate cut expected next year, borrowers would save further on interest.

Interest rates aside, if you choose to switch to an RLLR home loan there are added costs to note. For instance, SBI levies transfer and processing charges of 0.35% of the loan amount plus GST, with a minimum fee of Rs 2,000 and a maximum of Rs 10,000 plus GST. These charges may vary from bank to bank.

One should wait a bit longer, as other banks are also coming up with this scheme, so that one can choose a home loan from a bank of one’s choice and preference. But also take into consideration the charges, the extra paperwork, and the hassle and time of keeping a tab on multiple accounts (the home loan plus savings, joint and other accounts). If you choose a bank other than your savings bank, look at the spread (margin) the bank charges over and above the RLLR, and check its impact on the final rate of interest offered. Stick to the banks that offer the least spread, as their rates reflect the RBI’s repo rate policy most faithfully.

It is also to be noted that the RLLR takes effect from the month following an RBI monetary policy announcement. Borrowers also need to be aware and prepared that the RBI can increase the repo rate due to economic factors.

As far as private banks are concerned, Mr Rajiv Anand, executive director for corporate lending at Axis Bank, said, “It’s not necessary to use only external benchmarks; there are multiple avenues to meet the requirement that the RBI wants us to do… What RBI is essentially looking at is that the rates are being cut and there should be better transmission”. No further details were revealed on whether Axis Bank plans to offer RLLR, but he did mention that “Axis Bank’s asset-liability committee will take a call on the same.”

Hence, this scheme targets customers and borrowers who reside in Tier 1 or Tier 2 cities and have a steady annual income of Rs 6 lakhs. So before switching your home loan, take note of the above points: the charges, the spread between the RLLR and the final interest rate, and the possibility that the central bank may increase the repo rate due to the economic scenario.

Author
Rishi Khanna
Team Member- Equity Research & Valuation
(MSc Finance, NMIMS Mumbai. Batch 2019-21)

Connect with Rishi
on LinkedIn

Valuation methods and issues that arise while conducting valuation

Valuation of a Business is conducted in order to arrive at an estimate of the Economic Value of an Owner’s Interest in that business, under the guidance of a set of procedures. Valuation may be computed in order to present an accurate snapshot of the Financial Standing of the business to Current or Potential Investors. It is generally conducted when a company is looking to merge with or acquire another company, or to sell off all or a fragment of its operations. Other reasons to conduct a Valuation include establishing partner ownership, taxation, analysing the financial strength of the business (i.e. determining solvency), planning for future growth and profitability, or even divorce proceedings.

There are Three Different Approaches which are commonly used in Valuation:

  1. Income Approach
  2. Asset Based Approach
  3. Market Approach

Further is a brief description of the approaches to Valuation Models and the issues that arise when conducting Valuation under those methods.

I. INCOME APPROACH

  • Under the Income Approach, Valuation is based on the Economic Benefit expected from the investment and the level of risk associated with the investment.
  • There are several different Income Methods which include Capitalisation of cash flow or earnings, Discounted Future Cash Flows which is commonly known as DCF and the excess earnings method.
  • DCF computes the Net Present Value of the Cash Flows projected by the company. The underlying principle of this approach is that the Value of an asset is intrinsically based on its ability to generate Cash Flows.
  • This method relies more on the fundamental forthcoming expectations of the business rather than on the public market factors.
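A minimal DCF sketch follows; the cash flows and the 10% discount rate are purely illustrative assumptions, not figures from the article:

```python
# Minimal DCF sketch: value = sum of projected free cash flows discounted
# at a required rate of return. Inputs are hypothetical.

def dcf_value(cash_flows, rate):
    """Present value of a list of year-end cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

projected_fcf = [100.0, 110.0, 121.0]   # hypothetical 10% growth path
value = dcf_value(projected_fcf, 0.10)
print(round(value, 2))   # 272.73: each discounted flow works out to 90.91
```

In practice a terminal value (discussed later in this section) is added for cash flows beyond the explicit forecast horizon.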

ISSUES THAT ARISE WHEN CONDUCTING VALUATION USING INCOME APPROACH:

1. USING ACCOUNTING PROFITS INSTEAD OF CASH FLOW

  • The value of a business depends largely on the profitability, financial health and earning power. Accounting Profits and Cash flows are two means to measure it.
  • Free cash flow is a better means to analyse profitability as compared to accounting profit because the Revenues and expenditures of the business are accounted for at the right time and the cash flows of a business cannot be manipulated as much as earnings.

2. OVERLY OPTIMISTIC REVENUE FORECASTS

  • At times, while projecting the forecasts, revenue is shown shooting up during the forecast period. This results from assuming a hypothetically high growth rate.
  • What the valuer fails to notice is whether the growth of the company is aligned with the industry, what the market size of the company is or even whether the company has a strategy to achieve the desired growth goal.

3. NARROW FORECAST HORIZON

  • What should be taken as the optimal length of the Financial Forecasts is one of the key choices that need to be made.
  • If a shorter forecast period is considered, it fails to give the effect of different parameters on the business in the upcoming years. For example in case of a company under FMCG sector, it would not be right to prepare financial forecasting for a period of just two to three years.
  • On the other hand, if the length considered is too long, the valuation could turn out misleading, because in the long run the risks associated with the business cannot be anticipated easily.
  • Thus, it is essential to consider an explicit time frame while conducting valuation that is neither too short nor too long. A time frame ranging from 5 to 7 years is generally considered when performing DCF Valuation.

4. INCORRECT BETA

  • Beta comes into consideration when deriving the Cost of Equity of a company.
  • When Valuation of a company is done under the circumstances of a merger or an acquisition, majority of the times, the Beta is taken to be that of the Acquiring Company. This is done under the assumption that the Target Company is a smaller company when compared to its bidder, thus the Target Company would have no influence on the resulting Capital Structure as well as the riskiness of the New Company.
  • Other times, the Beta considered is an estimation of the emerging company’s Beta with respect to a Market Index. But just using the historical beta is very risky when the company or its future risk prospects are not analysed.
  • At times, when levering and unlevering the Beta to arrive at the estimate, incorrect formulae are used. The levering should depend on the amount of debt prospect of the company in future.
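The levering and unlevering the bullets refer to is commonly done with the Hamada relationship; a sketch with illustrative betas, tax rate and capital structures (none of these numbers come from the article):

```python
# Unlevering a peer's beta to an asset beta, then relevering it at the
# target's expected future debt/equity ratio (Hamada formulas).
# All inputs are illustrative assumptions.

def unlever_beta(levered_beta, debt_equity, tax_rate):
    return levered_beta / (1 + (1 - tax_rate) * debt_equity)

def relever_beta(unlevered_beta, debt_equity, tax_rate):
    return unlevered_beta * (1 + (1 - tax_rate) * debt_equity)

# Strip out the peer's capital structure, then apply the target's future D/E
asset_beta = unlever_beta(levered_beta=1.2, debt_equity=0.5, tax_rate=0.30)
target_beta = relever_beta(asset_beta, debt_equity=1.0, tax_rate=0.30)

print(round(asset_beta, 3))   # 0.889
print(round(target_beta, 3))  # 1.511
```

Note how the relevered beta depends on the *future* debt prospect, which is exactly the point the last bullet makes.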

5. HIGH COST OF EQUITY

  • Along with Beta, another problem that arises in deriving the Cost of Equity is the Risk free Rate.
  • Majority of the times the Risk free rate considered is just the 10 year Government Bond Yield.
  • What one fails to consider here is Country Risk. If it is ignored, the Cost of Equity of a company in the United States would be the same as that of a company in Bolivia, which is highly incorrect.
  • Thus, the country default spread (country risk premium) needs to be deducted from the local government bond yield to arrive at an accurate Risk Free Rate.
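The adjustment the bullets above describe can be sketched as follows; the yields, beta and equity risk premium are illustrative assumptions:

```python
# Cost of equity with a country-risk adjustment: strip the country
# default spread out of the local 10-year government bond yield before
# using it as the risk-free rate in plain CAPM. Inputs are illustrative.

def risk_free_rate(local_10y_yield, country_default_spread):
    return local_10y_yield - country_default_spread

def cost_of_equity(rf, beta, equity_risk_premium):
    return rf + beta * equity_risk_premium   # CAPM

rf = risk_free_rate(local_10y_yield=0.075, country_default_spread=0.020)
ke = cost_of_equity(rf, beta=1.1, equity_risk_premium=0.06)
print(round(rf, 3), round(ke, 3))   # 0.055 0.121
```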

6. INCORRECT DISCOUNT RATES

  • The discount rate on cash flows should reflect their riskiness; it is a wrong notion to simply pick a higher discount rate for higher-risk cash flows rather than deriving it from that risk.
  • Generally, Book Values of Debt and Equity are used for arriving at the Weighted Average Cost of Capital (WACC). But this violates the Basic Principle of Valuation, which is to arrive at a Fair Value.
  • Thus, when valuing an on-going business, the market values of debt and equity should be taken into consideration to derive the WACC.
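The recommendation above, WACC from market values, is a one-liner; the capital structure and component costs below are illustrative assumptions:

```python
# WACC computed with market values of debt and equity, as recommended
# above. All inputs are illustrative.

def wacc(mkt_equity, mkt_debt, cost_equity, cost_debt, tax_rate):
    total = mkt_equity + mkt_debt
    w_e, w_d = mkt_equity / total, mkt_debt / total
    # Debt cost is taken after tax because interest is tax-deductible
    return w_e * cost_equity + w_d * cost_debt * (1 - tax_rate)

rate = wacc(mkt_equity=600.0, mkt_debt=400.0,
            cost_equity=0.12, cost_debt=0.08, tax_rate=0.30)
print(round(rate, 4))   # 0.0944
```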

7. HIGH LONG TERM GROWTH RATE

  • The long-term growth rate used has a Material Impact on the value of a company.
  • This is considered when arriving at the Terminal Value. The Terminal Value is the Present Value of all the Cash Flows at a future point in time, when the cash flows are expected to be at a stable growth rate.
  • These Long Term growth rates generally lie in the range of 5% to 6%.
  • They depend on the growth rate of the economy and never exceed that figure. This is because, a higher growth rate than the GDP rate of the economy would imply that the company would grow larger than the economy. Applying such a high rate would result in overvaluation.
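The Terminal Value described above is usually computed with the Gordon growth formula; the inputs below are illustrative assumptions:

```python
# Gordon-growth terminal value: the value of all cash flows beyond the
# forecast horizon, assuming a stable growth rate g that stays below the
# discount rate (and below long-run GDP growth). Inputs are illustrative.

def terminal_value(final_year_fcf, discount_rate, growth_rate):
    if growth_rate >= discount_rate:
        raise ValueError("growth rate must be below the discount rate")
    return final_year_fcf * (1 + growth_rate) / (discount_rate - growth_rate)

tv = terminal_value(final_year_fcf=100.0, discount_rate=0.10, growth_rate=0.05)
print(round(tv, 2))   # 2100.0
```

The guard clause makes the overvaluation warning concrete: as g approaches the discount rate the denominator shrinks and the terminal value explodes.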

II ASSET BASED APPROACH

  • Under this approach, the value of a business is derived as a sum of its parts. This method takes into account all the assets and liabilities of the Business.
  • The Value of the Business is the difference between value of all relevant assets of the business and value of all the relevant liabilities.

III MARKET APPROACH

  • This approach is used to derive the appraisal value of the business, intangible asset, security or business ownership interest by considering market prices of comparables which have been sold recently or are still available.
  • There are two main Valuation Methods under this approach-
    1. Comparable Companies Method – This method entails the use of valuation multiples of companies which are traded publically.
    2. Comparable Transactions Method – This method entails the use of valuation figures of observed transactions of companies in the same industry as that of the Target Company.
  • Certain common multiples considered for Relative Valuation are the P/E Ratio, PEG Ratio, EV/Sales and EV/EBITDA.

ISSUES THAT ARISE WHEN CONDUCTING VALUATION USING MARKET APPROACH:

1. INCORRECT PEER SELECTION

  • The industries in a market are often loosely defined, making it difficult to select optimal peers for a Comparable Company Analysis.
  • Some of the major factors to be considered while selecting peers are product line, geography, seasonality, revenue, etc.
  • Another way to identify peers is to check the annual report of the company, in case the company is a listed one, where the peers would be mentioned.
  • The same applies to a Comparable Transaction Analysis, where the multiples of an extraordinary transaction should not be considered for conducting the Valuation.

2. INCORRECT MULTIPLES

  • There are a number of Multiples available to value the worth of a business. Each of them relates a specific measure of financial performance to the potential selling price of the business.
  • If a multiple is based on the Net Cash Flow, it should not be applied to the Net Profit.
  • For valuing new companies, which have small sales and negative profits, multiples such as Price-to-Sales or Enterprise Value to EBITDA can be misleading. In such cases, Non-Financial Multiples can be helpful.
  • Certain common multiples considered for Relative Valuation are the P/E Ratio, PEG Ratio, EV/Sales and EV/EBITDA.

3. NOT ADJUSTING THE ENTERPRISE VALUE TO EBITDA MULTIPLE FOR NON-OPERATING ITEMS

  • The Enterprise Value should not include excess cash. Also the Non-Operating Assets must be evaluated separately.
  • Operating leases must be considered in the Enterprise Value, and the interest costs associated with such operating leases must be added back to the EBITDA Value.
  • This is because though the Value of Lease and the Interest Cost of the lease, affect the ratio in the same direction, the effect is not of the same magnitude.

4. TAKING AVERAGE INSTEAD OF MEDIAN

  • While conducting Relative Valuation, it is a common practice to consider the Average value of the peers’ multiples instead of the Median value.
  • The Median is the middle element of the data; taking the Median enables extremely high or low values to be disregarded.
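The difference the bullets describe is easy to see with one outlier peer; the P/E values are hypothetical:

```python
# Median versus average of peer multiples: one outlier peer drags the
# average far more than the median. The P/E values are hypothetical.

from statistics import mean, median

peer_pe = [14.0, 15.0, 16.0, 17.0, 60.0]   # last peer is an outlier

print(mean(peer_pe))    # 24.4 -> distorted by the outlier
print(median(peer_pe))  # 16.0 -> representative of the peer set
```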

5. USING RELATIVE VALUATION AS PRIMARY VALUATION METHODOLOGY

  • Valuation should not be derived by depending on just one methodology, especially just Relative Valuation. Relative Valuation is a considerably good method to validate the value derived from other Valuation Methods.
  • One issue of relying on Relative Valuation is that getting data of a privately owned business is difficult. Also the shares of a public company are more liquid than that of a private company.

CERTAIN OTHER COMMON ISSUES THAT ARISE WHEN CONDUCTING VALUATION:

1. CONSIDERING VALUATION IS A SCIENTIFIC FACT

  • Most of the time, it is asserted that Valuation is a Scientific Fact rather than an Opinion.
  • A logical process is followed to reach a Valuation Figure or Opinion, so science does play a role.
  • But what is forgotten is that the Value arrived at from any Valuation Method is contingent on a set of assumptions and expectations: the future prospects of the company, the industry or even the country, along with the Valuer’s appraisal of Company Risk.
  • Hence, valuation is more of an Art than a Science.

2. ASSUMING THAT EVERY ESTABLISHED BUSINESS HAS A POSITIVE GOODWILL

  • Business Goodwill is actually directly related to the earning power of the business.
  • If the Business earnings fall below a fair return on its assets, then the business has a negative goodwill.

3. FAILING TO ASSESS COMPANY SPECIFIC RISK

  • When conducting Business Valuation, risk assessment plays a very important role.
  • Each company has different financial and operational factors which contribute to its risk profile.
  • Thus, each company has different Discount and Capitalisation rates which need to be taken into consideration.

4. REDUNDANT ASSETS ARE NOT ADDED TO COMPANY VALUE

  • Redundant assets are those which are not required for the day to day operations of the business. The value of such assets should be added to the value of the business or company.

5. THINKING THAT THE BUSINESS PURCHASE PRICE AND PROJECT COST ARE THE SAME

  • Many a time, the project cost is considered to be the same as the purchase price. But that is not correct.
  • In order to arrive at the Purchase Price, certain adjustments need to be made to the project cost.
  • One such adjustment is that the buyer of the business also needs to inject certain working capital.
  • If there is any deferred equipment, its maintenance cost also needs to be adjusted.
  • There are certain investments which are needed to maintain the income stream such as hiring staff replacements, licences, regulatory compliances, etc. Such costs also need to be adjusted.

The Most Appropriate Method of Valuation is the one capable of incorporating all the significant factors that have a material effect on the Fair Value.

Furthermore, one must keep in mind the above issues which can arise while deriving Valuation for a Business, Stock or Company in order to avoid any misleading valuation figures.

Author
Vhabiz Lala
Volunteer – Equity Research & Valuation (M.Sc. Finance, NMIMS – Mumbai. Batch 2018-20)

Connect with Vhahbiz on LinkedIn

China is coming up with cryptocurrency – shock or surprise?

The meeting of finance and technology, widely known as fintech, is changing the landscape of investment management. As the saying goes, it’s tough to make predictions, especially about the future. But it is manifestly worth the effort, because catching big trends is how fortunes are made and catastrophic losses are avoided.

Blockchain-related topics are extremely hot nowadays, and cryptocurrencies are one of them. So, what is a cryptocurrency? From the word itself you can see that it has something to do with cryptography and currency. For its part, cryptography is the process of converting ordinary plain text into unintelligible text and vice versa. Modern cryptography deals with confidentiality (information cannot be understood by anyone else), integrity (information cannot be altered undetected), and authentication (sender and receiver can confirm each other’s identity).
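A small illustration of the integrity property, using a SHA-256 hash from Python’s standard library (the messages are made up; this is a sketch of the concept, not how any particular blockchain works):

```python
# Integrity via hashing: any alteration of a message changes its digest,
# so tampering is detectable. Messages are made up for illustration.

import hashlib

def digest(message: str) -> str:
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

original = digest("pay Alice 10 coins")
tampered = digest("pay Alice 100 coins")

print(original == tampered)                       # False: the change is detected
print(original == digest("pay Alice 10 coins"))   # True: same message, same digest
```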

Putting all the pieces together, cryptocurrency is a medium of exchange value (just like ordinary money) that exists in the digital world and relies on encryption, which makes transactions secure. A cryptocurrency is an alternative form of payment to cash, credit cards, and cheques. The technology behind it allows you to send it directly to others without going through a third party like a bank. In short, cryptocurrencies are like virtual accounting systems.

As you can find, there are many exciting use cases for this. You can send money back to your family without incurring large international fees if you’re working in a different country. Merchants no longer have to worry about payment fraud because people can only spend what they have. Summing up, Cryptocurrency is a radically new way of paying that makes all the transactions secure and helps to get rid of intermediaries represented by banks, which also contributes to a significant reduction in the commission fee.

Cryptocurrencies can either be based on blockchain technology or be centrally issued, circulated within a community or geographic location, or tied to a fiat currency. Blockchain is a revolutionary ledger technology with a wide array of potential applications, from smart contracts to healthcare systems, but it did not catch the attention of speculators and the media until Bitcoin surged from $0.009 to more than $11,000 per coin. There are more than 869 cryptocurrencies, but without fundamentals they are little more than “trust machines” and, as such, are nearly unanalyzable. They generate no cash flow, making discounted valuation approaches inapplicable, though this criticism applies to gold as well.

Although it is cheaper to invest in the early stages, during a new cryptocurrency’s initial coin offering, doing so may overlook the network effect that favors older altcoins (alternative cryptocurrencies other than bitcoin).

Cryptocurrencies are going to play a major role in the coming years, and China has decided to be part of that future in a big way. China’s official digital currency is nearly ready: as much as China frowns on cryptocurrency, it is happy to introduce its own. There is a great deal of confusion and many misunderstood facts surrounding the legal status of cryptocurrency in China. Headlines like “China Bans Bitcoin”, “China Bans Crypto Exchanges” and “China Bans Bitcoin Mining” leave most people unclear on where China stands on cryptocurrency and whether that has any real impact on how its citizens behave.

The People’s Bank of China has revealed that its digital currency “can now be said to be ready” after five years of research. Don’t expect it to mimic existing crypto, however: according to payments Deputy Chief Mu Changchun, it will use a more complex algorithm and structure. The project was started by Zhou Xiaochuan, the former governor of China’s central bank, who retired in March; he wanted a digital currency that would protect China from having to adopt a technology standard, like Bitcoin, designed and controlled by others.

Facebook Inc.’s push to create cryptocurrency Libra has caused concerns among global central banks, including the People’s Bank of China (PBOC), which said the digital asset must be put under central bank’s supervision to prevent potential foreign exchange risks and protect the authority of monetary policy. Sun Tianqi, an official from China’s State Administration of Foreign Exchange, said, “Libra must be seen as a foreign currency and be put under China’s framework of forex management”. Dave Chapman, executive director at BC Technology Group Ltd also said on similar lines that, “It is without a doubt that with the announcement of Libra, governments, regulators and central banks around the world have had to speed up their plans and approach to digital assets. They have to consider the possibility that non-government issued currencies could dramatically disrupt finance and payments.”

How the cryptocurrency issued by China will differ from other cryptocurrencies might be one of the questions on your mind. To begin with, in launching the new cryptocurrency, referred to as DC/EP for Digital Currency/Electronic Payment, the People’s Bank of China (PBOC) has stolen a march on both Facebook and the other central bankers who have been discussing the possibility of a cryptocurrency and its implications. What sets China’s DC/EP apart from Libra and the “synthetic hegemonic currency” (SHC) of Mark Carney (the Bank of England’s Governor), according to Paul Schulte (founder and editor of Schulte Research, a firm that researches banks, financial technology, bank algorithms and credit algorithms), is that while Libra is little more than early-stage computer code and the SHC doesn’t appear to have gone much further than Carney’s mind, the Chinese cryptocurrency is ready to launch. “China is barreling forward on reforms and rolling out the cryptocurrency,” says Schulte. The PBOC will be the first central bank to come up with its own cryptocurrency. Unlike decentralized blockchain-based offerings, this one could give Beijing more control over its entire financial system. It would increase the PBOC’s ability to root out risks and crack down on money laundering. It could also give the government an unprecedented window into individuals’ private lives.

Deputy Chief Mu Changchun described the central bank’s “two-tiered” system, wherein the bank would create the cryptocurrency and a small group of trusted commercial businesses would “pay the central bank 100% in full” to be allowed to distribute it. This dual delivery system suits China’s national conditions: it can use existing resources to mobilize the enthusiasm of commercial banks while smoothly improving the acceptance of the digital currency across China. If China’s leaders sign off on the idea of a legal cryptocurrency for the whole country, its introduction will likely be gradual. Early adopters would be barred from using it in investment products, a person familiar with the central bank’s plans says, which would make the impact on monetary policy negligible.

“China’s strategic plan is to integrate more closely with the rest of the world. Cryptocurrency is just one of the means to have a more internationalized renminbi. It’s all strategic. It’s all long term”, said Charles Liu, chairman of HAO International, a private equity firm investing over $700 million in Chinese growth companies. Finally, the Chinese government said that the cryptocurrency could launch as soon as November 11, China’s busiest shopping day, known as Singles Day.

Author
Pratik Jaju
Team Member– Fintech
(M.Sc. Finance, NMIMS – Mumbai. Batch 2019-21)

Connect with Pratik on LinkedIn
Co-author
Omkar Pawar
Team Member– Fintech
(M.Sc. Finance, NMIMS – Mumbai. Batch 2019-21)

Connect with Omkar on LinkedIn

The Curious Case of Quiescent Inflation & Negative Yielding Junk Bonds

One of the most important questions being asked in financial media today appears to be, “Is the Phillips Curve Dead?”

Before I jump into the analysis of whether that is the case, and of the impact it will have on the future course of monetary and fiscal policy, let me give a brief explainer of the concept of the Phillips Curve.

A.W. Phillips stated that there was a trade-off between unemployment and inflation in an economy. He implied that as the economy grew, unemployment went down, leading to tighter labor markets. Tighter labor markets warranted higher wage increases. Companies, in order to maintain their margins, would pass this higher input cost on to consumers, where it would then show up in the CPI (Consumer Price Index, a gauge of inflation).
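The mechanism just described is often summarised in the expectations-augmented form below; this is the standard textbook rendering, not an equation from the article:

```latex
% Expectations-augmented Phillips curve (standard textbook form):
%   \pi_t   : inflation at time t
%   \pi_t^e : expected inflation
%   u_t, u^n: unemployment rate and its natural rate
%   \beta>0 : sensitivity of inflation to labor-market slack
\[
  \pi_t = \pi_t^{e} - \beta\,(u_t - u^{n})
\]
```

A "flattening" of the curve, the subject of this article, corresponds to a fall in \(\beta\): inflation responding less and less to labor-market tightness.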

You could say this was the case before the 1980s, however, since then, the relationship between the two seems to have hit a “rough patch” or flattened out.

Source: Bank for International Settlements (BIS)

The above chart regresses PPI inflation (%) on growth in unit labour costs (%). As defined by the OECD,

“Unit labor costs (ULC) measure the average cost of labor per unit of output and are calculated as the ratio of total labor costs to real output.

A rise in an economy’s unit labor costs represents an increased reward for labor’s contribution to output. However, a rise in labor costs higher than the rise in labor productivity may be a threat to an economy’s cost competitiveness, if other costs are not adjusted in compensation.”1
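The OECD definition above boils down to a simple ratio, ULC = total labour costs / real output. A minimal sketch, with hypothetical numbers chosen purely for illustration:

```python
# Unit labour cost (ULC) per the OECD definition:
#   ULC = total labour costs / real output
# All figures below are hypothetical.

def unit_labour_cost(total_labour_cost, real_output):
    return total_labour_cost / real_output

def growth_rate(new, old):
    return (new - old) / old * 100.0

# Hypothetical economy: labour costs rise 4%, real output (productivity
# proxy) rises only 3%.
ulc_t0 = unit_labour_cost(1000.0, 500.0)   # 2.00 per unit of output
ulc_t1 = unit_labour_cost(1040.0, 515.0)   # ~2.02 per unit of output

# ULC rises precisely when labour costs outpace real output growth.
print(round(growth_rate(ulc_t1, ulc_t0), 2))  # 0.97 (% ULC growth)
```

This is why ULC growth is used as the x-axis in the BIS chart: it captures labour-cost pressure net of productivity.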

Just from the chart, one can infer that the slope of the regression and its R² (that is, the strength of the link between PPI inflation and ULC growth) have flattened significantly when comparing the pre- and post-1985 data.
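To make the “flattening” claim concrete, one can fit the same simple regression on the two sub-periods and compare slope and R². The data below are synthetic, constructed only to mimic the qualitative pattern in the BIS chart, not the actual BIS series.

```python
import numpy as np

# Synthetic illustration: a steep pre-1985 link and a flat post-1985 link
# between ULC growth (x) and PPI inflation (y). Not real data.
rng = np.random.default_rng(0)

def slope_and_r2(x, y):
    """OLS slope and R^2 for a one-regressor linear fit."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, 1 - resid.var() / y.var()

x = rng.normal(3.0, 1.0, 200)                # ULC growth, %
y_pre = 0.9 * x + rng.normal(0, 0.5, 200)    # strong pass-through
y_post = 0.1 * x + rng.normal(0, 0.5, 200)   # weak pass-through

for label, y in [("pre-1985", y_pre), ("post-1985", y_post)]:
    s, r2 = slope_and_r2(x, y)
    print(f"{label}: slope={s:.2f}, R^2={r2:.2f}")
```

A shallower slope means a given rise in unit labour costs passes through into far less producer-price inflation, which is the phenomenon the rest of this article tries to explain.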

Hence, I would like to devote a major portion of this article to exploring the structural changes in world economies that have led to this compelling phenomenon.

Lower bargaining power emanating from a declining share of income that accrues to labor

  • A June 2018 research paper titled, “Productivity and Pay: Is the link broken?”2 suggests that post-industrialization (i.e., since the 1980s), median compensation grew by only 11% in real terms and production workers’ compensation increased by a meagre 12%, compared with a 75% increase in labor productivity. Since 2000, average compensation has also begun to diverge from labor productivity.
  • Apart from the weaker link between the above two variables, the continued sluggishness in wage growth can largely be attributed to productivity growth being far weaker than it was before the crisis.3

Globalization & The Threat of Production Relocation

  • The increased integration of production and complex supply chains connecting advanced economies with emerging market economies, outsourcing, and the relatively smooth and easy flow of money and information across borders have forced workers in rich countries to compete with those in poorer ones.
  • The IMF World Economic Outlook (2017) attributes about 50% of the fall in the labor share in developed economies to technological advancement, with the fall in the price of investment goods and advances in ICT encouraging the automation of routine tasks.

Declining Path of Unionisation

As unionization declines, the collective bargaining power of employees diminishes. In the United States, for example, the share of employees holding trade-union membership has steadily declined from 20% to 10% over the past few decades.

This makes it more difficult for the workers to capture a larger share of the productivity gains enjoyed by the firm as a whole.

Hence, we observe that wage growth in real terms has hardly seen a meaningful increase.

The shift from manufacturing to service economies and the era of automation

  • With the heightened contribution of artificial intelligence and automation in the manufacturing process, firms are able to substitute labor with capital and even the high-quality blue-collar jobs are at stake.
  • From an economic efficiency standpoint, it makes sense for a firm to get more work done for the same or lower cost than to waste resources in hiring and training employees. This could partly explain the delinking of productivity and wage growth.
  • With global PMIs crashing into contraction territory across world economies due to a host of factors, such as the dollar strength seen in 2018 (80% of global bank trade credit is denominated in dollars) and an uncertain CapEx/investment environment amid trade wars, among others, consumption has nonetheless stayed relatively resilient.
  • This may be partly attributed to economies’ reliance transitioning from manufacturing to services; as a result, the share of employment in services has also jumped in recent years.

Quantum Pricing & Long Term Inflation Expectations

  • The traditional theory states that wages are stickier than prices. If so, profit margins should ideally rise when demand increases. However, firm-level behavior suggests that firms tend to abstain from margin expansion for the sake of higher market share. Also, firms unable to generate sufficient sales tend not to reduce prices proportionately, to avoid losing the cash needed to meet their rising debt and interest burdens (which explains why inflation fell less than expected during the GFC).
  • Firms have since been engaging in “quantum pricing”, whereby they change the quality or composition of their products to absorb production-cost volatility instead of raising prices across the board. This makes prices stickier while keeping margins stable, and it becomes increasingly hard for mainstream macroeconomic models to capture such structural shifts in pricing affecting inflation.4
  • In a nutshell, all the above factors along with weak cyclical pressures drag longer-term inflation expectations lower (as observed by the 5Y5Y forward breakeven inflation, etc). Lower expectations through their negative feedback loop anchor inflation lower to some extent.5
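The 5Y5Y forward breakeven mentioned above is not quoted directly; it is implied by spot 5-year and 10-year breakevens. A minimal sketch of that computation, using simple annual compounding and hypothetical input rates:

```python
# Hedged sketch: 5y5y forward breakeven implied by spot 5y and 10y
# breakevens, under simple annual compounding. Inputs are hypothetical.

def forward_5y5y(be_5y, be_10y):
    """Forward rate f solving (1 + be_10y)^10 = (1 + be_5y)^5 * (1 + f)^5."""
    return ((1 + be_10y) ** 10 / (1 + be_5y) ** 5) ** (1 / 5) - 1

# If the 5y breakeven is 1.5% and the 10y is 1.7%, the market implies
# roughly 1.9% average inflation over years 5 to 10:
print(round(forward_5y5y(0.015, 0.017) * 100, 2))  # 1.9
```

When this implied forward rate drifts down, it signals that markets expect inflation to stay low well beyond the current cycle, which is the anchoring effect described in the bullet above.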

Low Rates, Asset Price Inflation & The Lure of Negative Yields: Glimpse

Markets have set their expectations in stone for rates being “lower for longer” due to the inflation dynamics stated above, secular stagnation going forward and maybe even price level targeting by central banks.

In an environment where markets will pounce on anything with a positive real yield, there may be a real risk of financial instability arising from the irrational bidding-up of risk assets, which can be observed nowhere more prominently than in negative-yielding junk bonds.

You are essentially paying companies with significant credit risk for (the privilege of) borrowing funds from you!

It may sound absurd, but what if I told you that this negative-yielding Japanese/European debt may in certain cases provide you with a dollar yield even higher than the positive yield you get on Treasuries? In other words (for example), -0.1% (¥) > 2.5% ($).
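As a rough, hypothetical preview of the intuition (the full mechanics and the associated risks are left for part II): under covered interest parity, a USD investor who buys a JPY bond and sells JPY forward earns approximately the bond’s yen yield plus the USD-JPY short-rate differential. All rates below are made up for illustration.

```python
# Simplified covered-interest-parity approximation (hypothetical rates):
#   hedged_usd_yield ≈ foreign_bond_yield + (usd_short_rate - foreign_short_rate)
# Selling the foreign currency forward earns the short-rate differential.
# (In practice the cross-currency basis shifts this further.)

def hedged_usd_yield(foreign_bond_yield, usd_short_rate, foreign_short_rate):
    return foreign_bond_yield + (usd_short_rate - foreign_short_rate)

# Hypothetical: -0.1% JPY bond yield, 2.5% USD short rate, -0.3% JPY short rate.
print(round(hedged_usd_yield(-0.001, 0.025, -0.003) * 100, 2))  # 2.7
```

Because the yen short rate is deeply negative, the forward hedge pays the investor the differential, and the hedged dollar yield (2.7% in this toy example) can exceed the 2.5% Treasury yield even though the bond itself yields -0.1%.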

I shall follow up on the mechanics of how this kind of sorcery is possible (along with the risks associated with the same) in part II of this article.

References:

  1. Retrieved from https://stats.oecd.org/glossary/detail.asp?ID=2809
  2. Stansbury, A. M., & Summers, L. H. (2017). Productivity and Pay: Is the link broken? (No. w24165). National Bureau of Economic Research
  3. IMF World Economic Outlook, April 2019
  4. https://www.bis.org/events/ccaresearchconf2018/rigobon_pres.pdf
  5. IMF Blog “Euro Area Inflation: Why Low For So Long?”
Author
Harsh Shivlani
Team Leader– Fixed Income & Derivatives
(M.Sc. Finance, NMIMS – Mumbai. Batch 2018-20)

Connect with Harsh on LinkedIn