By Bahar Gidwani
This post is part two of a two part series. Read part one >>
We recently published a chart that shows that the CSR performance of the roughly 5,000 companies in the CSRHub database has been remarkably stable over time. In fact, the average overall rating for the past 16 months has stayed between 47.4 and 48.7 on our 0 to 100 scale. (Note that all ratings mentioned here were estimated using our CSRHub average user profile. The results might be slightly different under different user rating profiles.)
Is this stability real or is it an artifact of the way we generate our ratings? We have checked our approach and believe it is real. However, we also believe ratings groups have a responsibility to be transparent about their processes. Therefore, we will share with you some of the details of our analysis.
First, we wondered whether our average rating showed no change simply because our individual ratings were not changing. If all, or almost all, of our company scores were the same each month, it would be natural for the average to remain unchanged.
Our system contains 70,200 month-to-month rating change pairs (the rating for a company in one month, compared to its rating in the next). We found that 42% of the time, a company's rating changed by at least 0.1 points. If those changes were nothing but random noise, they would follow a normal ("bell") curve centered on a mean of zero. Instead, 1.5% of our changes are more than three standard deviations (5.2 points) from the mean, far more than the roughly 0.3% a normal distribution would predict if the changes had only random noise as a source.
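This kind of tail-proportion check is easy to reproduce. A minimal sketch, using simulated changes in place of the actual 70,200 rating-change pairs (the spread of 1.7 points is an illustrative assumption, chosen so that three standard deviations is about 5.2 points):

```python
import random
import statistics

# Hypothetical month-to-month rating changes; in practice these would be
# the ~70,200 observed change pairs from the ratings database.
random.seed(0)
changes = [random.gauss(0, 1.7) for _ in range(70200)]

mean = statistics.fmean(changes)
sd = statistics.pstdev(changes)

# Fraction of changes lying more than three standard deviations from the mean.
outliers = sum(1 for c in changes if abs(c - mean) > 3 * sd)
share = outliers / len(changes)

# A pure normal distribution puts only ~0.27% of changes out here, so a
# noticeably larger observed share suggests real (non-noise) movement.
print(f"{share:.2%} of changes beyond 3 SD ({3 * sd:.1f} points)")
```

On real rating data, a tail share well above the normal-curve baseline is the evidence that the individual ratings are genuinely moving, even while the overall average stays flat.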
Next, we wondered whether one stable source somehow dominated the others and suppressed a trend. However, no single one of the 140+ sources we use contributes more than 20% of the data in our system (after weighting adjustments). Furthermore, we added both sources and data steadily during this period, so a shift in the percentage each source contributes to our overall data would have exposed this type of flaw.
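The concentration check described here amounts to computing each source's weighted share of the data and confirming that none dominates. A rough sketch, with hypothetical point counts standing in for the real 140+ sources:

```python
# Hypothetical weighted data-point counts for a handful of sources; the
# real system spreads its data across 140+ sources so that no weighted
# share exceeds 20%.
weighted_points = {f"source_{i}": 1000 + 150 * i for i in range(10)}

total = sum(weighted_points.values())
shares = {name: pts / total for name, pts in weighted_points.items()}

# Flag any source whose weighted share crosses the dominance threshold.
assert max(shares.values()) < 0.20, "a single source dominates the data"
```

Recomputing these shares each month, as sources and data are added, is what would surface a dominance problem if one existed.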
We do a lot of normalization and adjustment on each new data source we introduce. As a result, a new source is likely to be "centered" and have a neutral effect on our overall ratings. However, even when we looked at data from only the 80 sources we had at launch, the average ratings curve remained flat.
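One way to picture this "centering" is as rescaling a new source's raw scores to the system's existing mean and spread, a standard z-score adjustment. A sketch under that assumption (the target mean of 48 and spread of 6 are illustrative; CSRHub's actual normalization may differ):

```python
import statistics

def center_source(raw_scores, target_mean=48.0, target_sd=6.0):
    """Rescale a new source's raw scores onto the system's 0-100 scale so
    the source enters with a neutral effect on the overall average.
    (An assumed z-score adjustment, not CSRHub's exact method.)"""
    mean = statistics.fmean(raw_scores)
    sd = statistics.pstdev(raw_scores) or 1.0  # guard against zero spread
    return [target_mean + target_sd * (s - mean) / sd for s in raw_scores]

# A new source that scores companies on, say, a 1-5 scale comes out
# centered on the system-wide average.
adjusted = center_source([3.1, 4.5, 2.8, 3.9])
```

Because the adjusted scores land exactly on the target mean, adding such a source shifts the relative ranking of companies without moving the overall average, which is why the flat curve cannot be blamed on new sources alone.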
Finally, a flat curve could mean that we are simply not measuring the real sustainability performance "signal" of the companies we track. However, charts like the one we shared previously on Hewlett Packard, and these two on the Bank of China and BP, show rating changes that track our own reading, based on news events and sustainability reporting, of how these companies' performance has changed.
(Note the rise especially in governance and employee scores—areas of focus for Chinese companies in general.)
(As expected, there is a steady fall in environment and community scores, as the Gulf spill continues to take its toll.)
Our mission is to give access to broad, stable ratings of sustainability performance to those interested in corporate CSR performance. We appear to have achieved our goal and can only hope that real changes in company performance will eventually drive our average ratings upward.
Bahar Gidwani is a Co-founder and CEO of CSRHub. Formerly, he was the CEO of New York-based Index Stock Imagery, Inc., from 1991 through its sale in 2006. He has built and run large technology-based businesses and has experience building a multi-million-visitor web site. Bahar holds a CFA, was a partner at Kidder, Peabody & Co., and worked at McKinsey & Co. He has consulted for large companies such as Citibank, GE, and Acxiom, as well as for a number of smaller software and Web-based companies. He has an MBA (Baker Scholar) from Harvard Business School and a BS in Astronomy and Physics (magna cum laude) from Amherst College. Bahar races sailboats, plays competitive bridge, and is based in New York City.