Monthly Archives: February 2014

Cap Rate round up/critique

New York City’s Convene was the site for the well-attended Cap Rate Data Center conference yesterday. The space was spectacular, and I heard from many people that it was a HUGE step up from last year’s venue. A great place to host a 1,000-person event, in my opinion.

The event was a clear signal to me that the data center business is in flux. The changes are being compressed as time marches forward, and the buildings that house the technology driving the information age are quickly approaching a crossroads. The conference is primarily for the investor and owner/operator crowd, so this year’s event (as with virtually every other data center event) spent a lot more time looking backward over the past 5 years; forward-looking statements were typically one-offs right before the Q&A. Yet the impending future is where the substantial risk is, and with all of the finance people in the room, I expected more discussion of risk rather than of how great the market and industry are – especially while presenting a ton of data to the contrary.

The key takeaways for this attendee were:

  1. Data centers are largely seen as a commodity by brokers, but operators believe they are anything but
  2. Many data centers are 10-15 years old and are at, or just overdue for, a significant refresh of mechanical & electrical systems
  3. Modular is still seen as a niche play
  4. Pricing pressure is definitely in the market – too much supply in many markets forces prices down in every market
  5. Secondary and tertiary markets are where the action is and will be

My comments on the takeaways:

  1. The data centers are commoditized by brokers because they do not understand the technology inside them. They do not understand cloud, SSD vs. disk, wavelengths, dark vs. lit fiber, systems architecture, or virtualization, so they commoditize what they do know: buildings and price. I thought it was interesting when Jeff Moerdler from Mintz Levin stated that the negotiation of contracts is being driven more by the CTO than by the Real Estate and Facilities groups, because the terms today are less about leases and more about SLAs – because of the technology, not the real estate. Brokers need to step up their game or hire people who can talk tech.
  2. There was a fair amount of discussion about facility age. It’s important because densities are increasing, the technology in data centers is changing more rapidly than ever, and if you have an older facility you are looking at millions per megawatt to upgrade what you have – while you have customers in it and are trying to attract new ones. There is a lot of downside financial risk there, both in having to perform the upgrades and in staying current and relevant. As the technically savvy companies who were the brass rings of the large data center deals over the past 5 years begin to build their own facilities, because it makes sense for them to do so, additional inventory will open up in markets where Google, Yahoo, Microsoft, and Facebook had large chunks of space. That space is hard to re-lease with just a fresh coat of paint and a broom-clean computer room. It will be cheaper to build new. It will also be cheaper to build without generator and UPS, because you deploy in a footprint vs. a facility. You won’t stub your toe and your heel at the same time.
  3. Modular is still seen as a niche play by the community, which is shocking given the history of BladeRoom, IO, and even HP, Dell, AST, and others. It’s a niche play for those who don’t understand cloud or computing as a utility yet. In fact, Mike Hagan from Schneider read an email from a colleague while he was chairing a panel, asking where there was a building to put containers in because they had demand. The niche went mainstream in front of the heavy hitters and everyone blinked. Folks – it’s getting technical. Fewer leases, more SLAs, cloud, peering exchanges, modular – all things that were passed over by the industry years ago are now driving it, or brokers will be seen as a niche.
  4. Pricing pressure accelerates when there is too much competition or something is perceived as a commodity. Data centers are expensive to build, slow to lease up, and there are a ton of them out there. With new construction accelerating, providers are building into a buyer’s market. I heard that you could do a deal of 250 kW+ with 5 different providers in Dallas right now for $100/kW. That is as low as I have ever seen. It’s also below what it takes to build and hold these facilities, so whatever deals they sign, they will be underwater on anyway. In older facilities. In a competitive, largely commoditized business.
  5. I completely agree with this statement. I spent 2012 and 2013 building a business plan around the same premise, using modular to keep costs low so that when pricing pressure did accelerate, the model would still make industry-average margins. The smaller markets mean more, smaller facilities vs. a couple of big ones, and it also means you can take a single customer to more markets under one logo, which is good for them, provided there is consistency of product.
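The $100/kW Dallas pricing in point 4 is easy to stress-test with a back-of-the-envelope model. Everything below except the $100/kW rate and the 250 kW deal size is my own illustrative assumption (build cost, amortization period, PUE, power price, other opex), so treat this as a sketch of why that rate is likely underwater, not as market data.

```python
# Back-of-the-envelope monthly margin on a 250 kW deal at $100/kW-month.
# Assumptions (mine, for illustration): $7M/MW build cost recovered over
# 10 years, PUE of 1.5, $0.05/kWh power, $25/kW-month of other opex,
# and an all-in rate where the landlord pays for power.

DEAL_KW = 250
RATE_PER_KW_MONTH = 100          # the Dallas pricing heard at the conference

BUILD_COST_PER_KW = 7_000        # assumption: ~$7M per MW, all-in
AMORT_MONTHS = 120               # assumption: 10-year straight-line recovery
PUE = 1.5                        # assumption: total facility power / IT power
POWER_PRICE_KWH = 0.05           # assumption
OTHER_OPEX_PER_KW_MONTH = 25     # assumption: staff, maintenance, taxes
HOURS_PER_MONTH = 730

revenue = DEAL_KW * RATE_PER_KW_MONTH
amortized_capex = DEAL_KW * BUILD_COST_PER_KW / AMORT_MONTHS
power_cost = DEAL_KW * PUE * HOURS_PER_MONTH * POWER_PRICE_KWH
other_opex = DEAL_KW * OTHER_OPEX_PER_KW_MONTH
margin = revenue - amortized_capex - power_cost - other_opex

print(f"Monthly revenue: ${revenue:,.0f}")
print(f"Monthly margin:  ${margin:,.0f}")   # negative under these assumptions
```

Tweak the assumptions however you like; the revenue line has very little room before the deal goes underwater.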

So if you are an investor looking to invest in the data center space, you’ll need to cut through the noise and figure out where to put your money. If you believe the presenters, big facilities in established markets are where you want to be. Yet look at the data on where the majority of growing companies – the ones that use data centers as a strategic part of their business – want to be, and what is driving those decisions. It’s newer facilities in more places, to serve customers who are using mobile more and more, who need storage backed up in multiple places, and who don’t want to pay to keep data at the Pentagon when a locked box in a small facility no one knows about is less of a target and just as connected. Look at the data and don’t listen to the hype. Find the next Compass, find the next IO, do a deal to get ’em going, and you’ll do more deals to launch them into orbit.

At the end of the day I am glad I took 2 years off from going to these conferences. I missed very little. Same people, same presentations, only the dates on the slides had changed. Just because someone is talking about it doesn’t mean it’s valuable or important.

See you in 2016.


Will there ever be a realistic SLA?

I read a blog last week that I thought was pretty insightful, because it used an actual event as the backdrop for making a point: SLAs, while an expectation in today’s data center world, still aren’t worth much. The scenario was recent – a data center in Dallas went lights out. Totally lights out. Fortunately there was an SLA; however, the business losses and the SLA payout were worlds apart in value.

As a data center guy I have written, edited, and negotiated maybe a dozen different types of SLAs. There are SLAs for power, for network, for apps, for hardware – for every component EXCEPT functionality. Yet when the shit hits the fan, all anyone cares about is that the application works. Not whether there was a bad card in a router, or the OS went rogue with a bad process, or a plug got too hot and set off the VESDA. The implied promise is that if the shit hits the fan, the provider has all of the required components covered so that functionality is maintained. The reality is that if YOU, the customer/tenant, haven’t taken responsibility for architecting the environment to maintain functionality no matter what, then it won’t be maintained. There are too many interdependent variables creating an environment where a domino effect can really do some damage. Google still has Gmail outages that cost a lot of ad revenue.

So let’s say that the customer really, really, really, really needed an SLA that protected them – a real 100% SLA, where if anything went wrong they would be compensated for whatever expense and aggravation they incurred as a result of an event that was unplanned or catastrophic, or both. As a provider – and I have been one – I would want the customer who wanted that kind of SLA to declare to me on a weekly basis what the value of the data was, what changes they made to their systems and how those changes followed a strict risk mitigation protocol, and to declare that whatever changes were made would only affect their environment, not anyone else’s. Then I would want access to the environment so that I could perform an audit when they made their declaration, and every month for as long as they were a customer. Why?

Because SLAs are about risk. If a customer or tenant is asking me to assume more risk than I have designed into my systems, then guess what? Anything that is over the existing risk line I am going to de-risk. I am going to de-risk that customer’s risk the same way I mitigate and de-risk my own environments – understand how the environment is built, how it is supported, how it is tested, and how it is audited. And make sure that it actually is. If all of it checks out and they do things as we would do them, that helps keep real and perceived risk in check. That still leaves the value of the data to determine. That I would leave to a third party to figure out and give me a number; then I would get an insurance policy for the value of the data, with a variable value assigned so that as the value of the data goes up, the policy covers it. No audit? Then the customer gets the number of my data insurer, and they can have the chat we just had with the underwriter themselves.

So what is the reality in this ‘perfect’ scenario? IT’S NOT REALITY!

It’s not reality because most customers don’t want anyone sniffing or poking around their infrastructure – landlords, auditors, even colleagues. They won’t give anyone access to the data because it’s valuable. But they won’t let anyone else tell them how valuable, and there is no Carfax for data or apps, so it’s the customer’s word against the landlord/provider’s when it comes to the value.

Bottom line here – if you want a 100% SLA, then it’s on you, the customer, to define and quantify what that means. You want a better approach? Execute a better strategy. Have redundant environments. Have facilities in two or more locations, and use cloud environments to do it.

And if you have a 100% SLA, cut it up into four-inch squares and use it to trim your toilet paper budget for 2014, or use it if the data center does go dark. Because that SLA will typically cover one month of rent, and definitely won’t cover a new pair of boxer shorts that will absolutely need to be replaced…
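To put rough numbers on that, here is a hypothetical sketch – every dollar figure below is invented for illustration – of how a typical credit-based SLA (a percentage of the monthly fee per outage hour, capped at one month of fees) stacks up against the business loss from the same outage.

```python
# Hypothetical SLA credit vs. actual business loss for one outage.
# All figures are invented; real contracts and losses vary widely.

def sla_credit(monthly_fee, outage_hours, pct_per_hour=5.0, cap_months=1.0):
    """Credit: a percentage of the monthly fee per hour of outage,
    capped at some number of months of fees (here, one month)."""
    credit = monthly_fee * (pct_per_hour / 100.0) * outage_hours
    return min(credit, monthly_fee * cap_months)

monthly_fee = 30_000              # hypothetical data center bill
outage_hours = 8                  # a "totally lights out" business day
loss_per_hour = 250_000           # hypothetical revenue/brand impact

credit = sla_credit(monthly_fee, outage_hours)
loss = outage_hours * loss_per_hour

print(f"SLA credit:    ${credit:,.0f}")   # never exceeds one month's rent
print(f"Business loss: ${loss:,.0f}")
```

The cap is the whole story: no matter how bad the outage, the credit formula tops out at the monthly fee, while the loss side has no ceiling at all.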


The Math of Bitcoin – Part II

The earlier blog post had a couple of friends reaching out to ask what bitcoin has to do with the data center – and, since it’s a hot topic, whether they (as operators) should be spending time chasing the opportunities in that ‘vertical’.

It’s the wild west folks. If you like gunfights, saddle up, pack plenty of ammo and draw faster than the other guy. But realize, yes you’re shooting, but you’re still getting shot at…

Bitcoin is a hot topic. Sure, there is a ton of negative press – money laundering, being able to buy drugs, bitcoin exchanges imploding – but what is the difference between bitcoin and US dollars? People and companies have laundered money for years (Scarface?). Being able to buy drugs – dealers take 5’s, 10’s and 20’s. Exchanges imploding – Lehman Brothers ring a bell?

The bitcoin and cryptocurrency phenomenon fixes a lot of the shenanigans in the financial systems because of how it works. It is distributed, transparent, and lacks central control. It also has quite a steep learning curve attached to it, as I have found out in the past couple of months. It is not mature – it has been in existence for only 5 years. So to say that this bitcoin thing is doomed to fail is like saying in 1913, five years into production of the Model T, that the automobile will never take off. Prior to the Volkswagen Beetle, the Model T was the most popular production car ever made.

There are flaws (challenges?) built into how BTC was modeled and what its proposed end state looks like. This is the math of Bitcoin that people don’t talk about much. Math isn’t as sexy and nebulous as buying heroin with cryptocurrency or exchanges imploding, I guess, so I don’t see a lot of easily consumable data out there. This is my quick and dirty follow-up…

Right now the amount of BTC that will ever be produced is capped at 21,000,000. The overwhelming majority of that – around 99% – will have been mined by the mid-2030s if things go as expected, with the final fractions trickling out until around 2140. So roughly 20 years from now, nearly all the BTC that can ever be produced will have been produced.

BTC equation
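That 21,000,000 cap falls out of the protocol’s issuance schedule: the block reward starts at 50 BTC and halves every 210,000 blocks (roughly every four years). A quick sketch of the series:

```python
# Bitcoin's hard cap, derived from the halving schedule.
# The block subsidy starts at 50 BTC and halves every 210,000 blocks.
# Rewards are integer satoshis (1 BTC = 100,000,000 satoshis), truncated
# at each halving, which is why the true total is just under 21M.

BLOCKS_PER_HALVING = 210_000
SATOSHI_PER_BTC = 100_000_000

subsidy = 50 * SATOSHI_PER_BTC   # initial reward, in satoshis
total = 0
while subsidy > 0:
    total += BLOCKS_PER_HALVING * subsidy
    subsidy //= 2                # integer halving, as the protocol does

print(f"Total BTC that will ever exist: {total / SATOSHI_PER_BTC:,.8f}")
```

Run it and you get 20,999,999.9769 BTC – the “21 million” everyone quotes is a round-up.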

There are some interesting variables in the ether that may have a profound impact on the timing and ultimate value of BTC:

1. Hardware – the hardware race is an arms race. ASIC rigs that do the computations that mine (create) BTC are constantly being developed to be faster. They can crunch more numbers and make BTC faster. But the regulator on hardware speed is built into the protocol: the mining difficulty adjusts upward as the network gets faster, so faster rigs don’t produce more total BTC.

2. Complexity – as the network’s hashing power goes up, so does the mining difficulty. (The blockchain itself is essentially a receipt for every transaction each BTC has been used in.) So as more BTC gets created and used, the difficulty grows, adding more numbers that have to be crunched to mine more. This rising difficulty also makes it prohibitively expensive for someone to go out and buy enough rigs to control more than half of the network’s mining capability and thereby control production. The practical effect: as time marches on, the difficulty goes up, the rigs become relatively less powerful, you never make more bitcoin than you do in your first month, and you may never cover the cost of the hardware and operations at any point.

3. Costs – the mining rigs are priced right for what they do. That said, their design creates issues at scale. They don’t fit in standard racks, and they run 5-10 times hotter than traditional racks of industry-standard servers with corresponding power draws. So you need to add $50 for the rails and quadruple your power bill. That’s assuming you can cool the rack… This cost assessment is the reason I believe the exchanges, commercial mining operations, and hosters – whom I could paint as predatory, because the math isn’t there to support a profit at any point – are doomed. The success factor in the business model is not one that can be controlled. It is a gamble.

The gamble is whether or not BTC will appreciate more than it costs to make it. I discussed the business model in the previous post, so I won’t rehash that information, but the gist is this: if you’re mining BTC, you will spend more to make the BTC than it is worth when you make it. The BTC value may or may not increase to cover your costs. If it does, you’re successful; if not, hopefully you needed the loss anyway…
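The gamble can be sketched with a toy model. Every input below is an assumption of mine (hardware cost, opex, first-month output, and a difficulty growth rate in the ballpark of what the network was doing in 2013-14); the point is the shape of the curve, not the specific numbers.

```python
# Toy model of the mining gamble: difficulty growth shrinks your coin
# output every month, so your break-even BTC price keeps climbing.
# All inputs are illustrative assumptions, not market data.

hardware_cost = 20_000            # assumption: a small multi-rig setup
monthly_opex = 1_500              # assumption: power + hosting
difficulty_growth = 0.30          # assumption: ~30%/month network growth
first_month_coins = 10.0          # assumption: output at today's difficulty

coins_mined = 0.0
total_cost = float(hardware_cost)
output = first_month_coins
for month in range(12):
    coins_mined += output
    total_cost += monthly_opex
    output /= 1 + difficulty_growth   # each month yields fewer coins

breakeven = total_cost / coins_mined
print(f"Coins mined in year one: {coins_mined:.1f}")
print(f"All-in cost:             ${total_cost:,.0f}")
print(f"Break-even BTC price:    ${breakeven:,.0f}")
```

The first month is the best month by construction; everything after that is spending fixed opex to chase a shrinking output, which is exactly the diminishing-returns treadmill described above.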


Chop a line for the data center…

I was listening to Buckcherry this morning on my way to a client site, so while the title isn’t too crafty, it’s based on a damn good song from a solid band.

I was glad to see someone else in the data center world blogging about the movement downmarket by data center companies. I was smiling as I was reading Compass Data Center’s blog post ‘Everybody in the Pool‘ because I know they get it and have for a long time – a deal is a deal.

What really makes me smile are the reasons given for the big wholesalers going downmarket – a new exchange, or wanting to mop up some stranded power. That’s how it starts, and that’s why good coke dealers will front a customer a little first. The chances are the user will get hooked and keep coming back for more – or, in this case, mopping up stranded power becomes a full-fledged addiction to higher-margin revenue.

The issue is that the landlord is now competing for the same deals as tenants, and landlords will ALWAYS have the upper hand on price. So now what?

A deal is a deal – the reality is just settling in for those who don’t watch this stuff as closely as we do. If a deal is a deal, then a data center is a data center, and landlords, with the upper hand on price, just kicked their tenants in the nuts.


The Math of Bitcoin

With the recent implosion of Mt. Gox – a bitcoin exchange in Japan – the spotlight is back on Bitcoin. The discussions range from who can hurl the most insults and assign the most blame, to hand-wringing about what the future holds. I will suggest another way to look at bitcoin (BTC) and focus the discussion on math-based reality.

I have looked into BTC and even modeled out a hosting offering for it, because, as you might not expect, a lot of the BTC miners are 20-somethings with a mining rig plugged into their apartment wall socket. I expect the latest implosion of price will wash a few dozen miners out; they will go do something else, or they’ll have an axe to grind with BTC and want to win it all back without changing the fundamentals of their efforts. I will share some of the business modeling I have done (no secret sauce here). These numbers are based on the latest and greatest hardware rig you can buy, the CoinTerra TerraMiner IV:

  • A single TerraMiner has a hash rate of 2 terahashes per second and costs $6,000.
  • For $6 million, you could acquire 2 petahashes per second with one thousand of these machines.
  • $60 million doubles the entire current network hash rate of roughly 20 petahashes per second. (This will probably change as miners wash out.)
  • The monthly quantity of all bitcoins mined is 108,000 BTC. Globally, that’s it, every 30 days.
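Those bullets are easy to sanity-check against the protocol’s constants at the time (25 BTC per block, one block roughly every 10 minutes):

```python
# Sanity-checking the bullet points against early-2014 protocol constants.

RIG_HASHRATE_THS = 2          # TerraMiner IV: 2 terahashes per second
RIG_COST = 6_000              # dollars per rig

rigs = 1_000
fleet_cost = rigs * RIG_COST                           # $6M
fleet_hashrate_phs = rigs * RIG_HASHRATE_THS / 1_000   # TH/s -> PH/s

BLOCK_REWARD_BTC = 25         # per block, after the 2012 halving
BLOCKS_PER_DAY = 144          # one block every ~10 minutes
monthly_issuance = BLOCK_REWARD_BTC * BLOCKS_PER_DAY * 30

print(f"1,000 rigs: ${fleet_cost:,} buys {fleet_hashrate_phs:.0f} PH/s")
print(f"Global monthly issuance: {monthly_issuance:,} BTC")
```

The 108,000 BTC/month figure is not a market estimate – it falls straight out of the block reward and the block interval, and no amount of extra hardware changes it.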

So to turn up some hashing capability, it’s $6M for the rigs. That’s the easy part. Let’s look at where you put them, because these are not your average pizza boxes…

The rigs are custom-built to hash numbers. They draw 2.2 kW per rig. Ten rigs fit inside a single 42U cabinet at $3,000 apiece, pegging power draw at 22 kW per rack. That is a TON of heat to deal with, and few data centers can. So 1,000 of these rigs is 100 cabinets at 22 kW/cabinet, or 2.2 MW of power for the rigs. Add another 30% for cooling overhead (minimum) and that’s another 660 kW, bringing the total power footprint to 2.86 MW, around the clock.

At 5 cents per kWh, that’s 2,087,800 kWh or $104,390 in power bills per month. Plus $6M in hardware, plus $300,000 in cabinets, plus rent at a facility that can handle the heat at $125/kW – so that’s $357,500/month for at least a 3-year term.

Let’s check the costs so far:

Hardware (CapEx): $6M (rigs) + $300,000 (cabinets)

Rent & power for 36 months (OpEx): $12,870,000 (rent) + $3,758,040 (power). Total = $16,628,040

Roughly $23M to get in the game.

Now, I assume you’ll need people to keep things running smoothly, so I would allocate $1.2M/year for a fully burdened team. That’s roughly $26.5M to launch a mining company.
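The whole cost stack rolls up into a short sketch. The inputs are the ones above; the only extras I have assumed are a 730-hour average month and that the 30% cooling adder applies to the full 2.2 MW IT load (which works out to a 2.86 MW footprint).

```python
# Rolling up the mining facility cost stack over a 3-year term.

RIGS = 1_000
RIG_KW = 2.2                     # draw per rig
COOLING_OVERHEAD = 0.30          # minimum adder on top of the IT load
POWER_PRICE_KWH = 0.05           # $/kWh
RENT_PER_KW_MONTH = 125          # $/kW-month at a high-density facility
HOURS_PER_MONTH = 730            # assumption: average month
TERM_MONTHS = 36

HARDWARE = 6_000_000             # 1,000 rigs at $6,000
CABINETS = 300_000               # 100 cabinets at $3,000
STAFF_PER_YEAR = 1_200_000       # fully burdened team

it_load_kw = RIGS * RIG_KW                          # 2,200 kW
footprint_kw = it_load_kw * (1 + COOLING_OVERHEAD)  # 2,860 kW

power_per_month = footprint_kw * HOURS_PER_MONTH * POWER_PRICE_KWH
rent_per_month = footprint_kw * RENT_PER_KW_MONTH

capex = HARDWARE + CABINETS
opex = (power_per_month + rent_per_month) * TERM_MONTHS + STAFF_PER_YEAR * 3

print(f"Power:        ${power_per_month:,.0f}/month")
print(f"Rent:         ${rent_per_month:,.0f}/month")
print(f"3-year total: ${capex + opex:,.0f}")
```

Note that the model is dominated by the recurring costs: rent and power are more than two and a half times the hardware spend over the term.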

The gotchas – the first month you mine will be the month that you make the most BTC. Every month the payouts get smaller and the network difficulty goes up, so you need better hardware. All. The. Time. Better hardware to chase diminishing returns.

Speaking of returns… The chart below was taken just moments before publishing. It shows network-wide mining revenue falling 50% over a few days’ time, from $4.5M to $2.3M. Ouch. Fortunately that tank is spread across the world, but getting cut in half is getting cut in half.

[Chart: daily bitcoin mining revenue, down from $4.5M to $2.3M]

So what is the point of all of this?

You can make money at this, and the old adage of ‘it takes money to make money’ rings true, but only if the value of money holds up. Giving someone a dollar today that is worth 50 cents by Friday is not a good business model. 
