Cap Rate round-up/critique

New York City’s Convene was the site of the well-attended Cap Rate Data Center conference yesterday. The venue was spectacular, and I heard from many people that it was a HUGE step up from last year’s. A great place to host a 1,000-person event, in my opinion.

The event was a clear signal to me that the data center business is in flux. The changes are being compressed as time marches on, and the buildings that house the technology driving the information age are quickly approaching a crossroads. The conference is primarily for the investor and owner/operator crowd, so this year’s event (like virtually every other data center event) spent most of its time looking back over the past five years; forward-looking statements were typically one-offs tossed out right before the Q&A. Yet the impending future is where the substantial risk sits, and with all of the finance people in the room I expected more discussion of risk rather than of how great the market and industry are, especially while presenting a ton of data to the contrary.

The key takeaways for this attendee were:

  1. Data centers are largely seen as a commodity by brokers, but operators believe they are anything but
  2. Many data centers are 10-15 years old and are at, or just past due for, a significant refresh of their mechanical and electrical systems
  3. Modular is still seen as a niche play
  4. Pricing pressure is definitely in the market – too much supply in many markets forces prices down in every market
  5. Secondary and tertiary markets are where the action is and will be

My comments on the takeaways:

  1. Data centers are commoditized by brokers because brokers do not understand the technology inside them. They do not understand cloud, SSD vs. disk, wavelengths, dark vs. lit fiber, systems architecture, or virtualization, so they commoditize what they do know: buildings and price. I thought it was interesting when Jeff Moerdler from Mintz Levin stated that contract negotiations are now being driven more by the CTO than by the real estate and facilities groups, because the terms today are less about leases and more about SLAs – because of the technology, not the real estate. Brokers need to step up their game or hire people who can talk tech.
  2. There was a fair amount of discussion about facility age. It matters because densities are increasing, the technology inside data centers is changing more rapidly than ever, and if you have an older facility you are looking at millions per megawatt to upgrade what you have – while you have customers in it and are trying to attract new ones. There is a lot of downside financial risk in both performing the upgrades and staying current and relevant. As the technically savvy companies that were the brass rings of the large data center deals of the past five years begin to build their own facilities (because it makes sense for them to do so), additional inventory will open up in markets where Google, Yahoo, Microsoft, and Facebook had large chunks of space. That space is hard to re-lease with just a fresh coat of paint and a broom-clean computer room. It will be cheaper to build new. It will also be cheaper to build without generators and UPS, because you deploy into a footprint instead of a facility. You won’t stub your toe and your heel at the same time.
  3. Modular is still seen as a niche play by the community, which is shocking given the history of BladeRoom, IO, and even HP, Dell, AST, and others. It’s a niche play for those who don’t yet understand cloud or computing as a utility. In fact, Mike Hagan from Schneider, while chairing a panel, read an email from a colleague asking where there was a building to put containers in because they had demand. The niche went mainstream right in front of the heavy hitters, and everyone blinked. Folks – it’s getting technical. Fewer leases, more SLAs, cloud, peering exchanges, modular: all things the industry passed over years ago are now driving it, and brokers who ignore them will become the niche.
  4. Pricing pressure accelerates when there is too much competition or something is perceived as a commodity. Data centers are expensive to build, slow to lease up, and there are a ton of them out there. With new construction accelerating, providers are building into a buyer’s market. I heard you could do a 250 kW+ deal with five different providers in Dallas right now at $100/kW. That is as low as I have ever seen. It’s also below what it takes to build and hold these facilities, so any deal done at that price is underwater from day one (see the quick math after this list). In older facilities. In a competitive, largely commoditized business.
  5. I completely agree with this one. I spent 2012 and 2013 building a business plan around the same premise, using modular to keep costs low so that when pricing pressure did accelerate, the model would still make industry-average margins. Smaller markets mean more, smaller facilities instead of a couple of big ones, and it also means a single customer can reach more markets with one logo – good for them, provided there is consistency of product.
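Here is the quick math promised in takeaway #4. Everything below is an illustrative assumption on my part (the build cost, amortization period, and operating cost are round numbers, not quotes from any provider), but it shows why $100/kW reads as underwater:

```python
# Back-of-the-envelope: does $100/kW-month cover the cost of delivering a kW?
# All inputs are illustrative assumptions, not actual provider economics.
deal_kw = 250                  # size of the hypothetical Dallas deal
price_per_kw_month = 100       # quoted street price, $/kW/month

build_cost_per_mw = 9_000_000  # assumed all-in build cost per MW of critical load
amort_months = 120             # assumed 10-year capital recovery
opex_per_kw_month = 40         # assumed power, staffing, and maintenance per kW

capex_per_kw_month = (build_cost_per_mw / 1_000) / amort_months   # $75/kW-month
all_in_cost = capex_per_kw_month + opex_per_kw_month              # $115/kW-month

monthly_margin = deal_kw * (price_per_kw_month - all_in_cost)
print(f"All-in cost ≈ ${all_in_cost:,.0f}/kW-month vs. ${price_per_kw_month} asked")
print(f"Margin on the deal: ${monthly_margin:,.0f} per month")
```

Under those assumptions the provider loses a few thousand dollars a month on the deal before it even gets interesting; change the assumptions and the number moves, but the direction at $100/kW is hard to escape.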

So if you are an investor looking at the data center space, you’ll need to cut through the noise and figure out where to put your money. If you believe the presenters, big facilities in established markets are where you want to be. Yet if you look at the data on where the growing companies that use data centers as a strategic part of their business actually want to be, and what is driving those decisions, it’s newer facilities in more places – closer to customers who are using mobile more and more, who need storage backed up in multiple locations, and who don’t want to pay to keep data at the Pentagon when a locked box in a small facility no one knows about is less of a target and just as well connected. Look at the data and don’t listen to the hype. Find the next Compass, find the next IO, do a deal to get them going, and you’ll do more deals to launch them into orbit.

At the end of the day I am glad I took two years off from these conferences. I missed very little. Same people, same presentations; only the dates on the slides had changed. Just because someone is talking about it doesn’t mean it’s valuable or important.

See you in 2016.

 


Will there ever be a realistic SLA?

I read a blog last week that I thought was pretty insightful because it used an actual event as the backdrop for making a point: SLAs, while an expectation in today’s data center world, still aren’t worth much. The scenario was recent – a data center in Dallas went lights out. Totally lights out. Fortunately there was an SLA; unfortunately, the business losses and the SLA remedy were worlds apart in value.

As a data center guy I have written, edited, and negotiated maybe a dozen different types. There are SLAs for power, for network, for apps, for hardware – for every component EXCEPT functionality. Yet when the shit hits the fan, all anyone cares about is that the application works. Not whether there was a bad card in a router, or the OS went rogue with a bad process, or a plug got too hot and set off the VESDA. The assumption is that if the shit hits the fan, the provider has all of the required components covered so that functionality is maintained. The reality is that if YOU, the customer/tenant, haven’t taken responsibility for architecting the environment to maintain functionality no matter what, then it won’t be maintained. There are too many interdependent variables, creating an environment where a domino effect can really do some damage. Google still has Gmail outages that cost a lot of ad revenue.

So let’s say the customer really, really, really, really needed an SLA that protected them – a real 100% SLA, under which, if anything went wrong, they would be compensated for whatever expense and aggravation they incurred as a result of an event that was unplanned or catastrophic, or both. As a provider – and I have been one – I would want the customer who wanted that kind of SLA to declare to me on a weekly basis what the value of the data was, what changes they made to their systems and how those changes followed a strict risk mitigation protocol, and that whatever changes were made would only affect their environment, not anyone else’s. Then I would want access to the environment so that I could perform an audit when they made their declaration, and every month for as long as they were a customer. Why?

Because SLAs are about risk. If a customer or tenant is asking me to assume more risk than I have designed into my systems, then guess what? Anything over the existing risk line I am going to de-risk. I am going to de-risk that customer’s risk the same way I mitigate and de-risk my own environments: understand how the environment is built, how it is supported, how it is tested, and how it is audited – and make sure that it actually is. If all of it checks out and they do things as we would do, that helps keep real and perceived risk in check. We still have the value of the data to determine. That I would leave to a third party to figure out and hand me a number; then I would get an insurance policy on the value of the data, with a variable value assigned so that as the value of the data goes up, the policy covers it. No audit? Then the customer gets the number of my data insurer and can have the chat we just had with the underwriter themselves.

So what is the reality in this ‘perfect’ scenario? IT’S NOT REALITY!

It’s not reality because most customers don’t want anyone sniffing or poking around their infrastructure – landlords, auditors, even colleagues. They won’t give anyone access to the data because it’s valuable. But they won’t let anyone else tell them how valuable, and there is no Carfax for data or apps, so it’s the customer’s word against the landlord/provider’s when it comes to value.

Bottom line here – if you want a 100% SLA, then it’s on you, the customer, to define and quantify what that means. You want a better approach? Execute a better strategy. Have redundant environments. Have facilities in two or more locations, and use cloud environments to do it.

And if you have a 100% SLA, cut it up into four-inch squares and use it to trim your toilet paper budget for 2014, or use it if the data center does go dark. Because that SLA will typically cover one month of rent, and it definitely won’t cover the new pair of boxer shorts that will absolutely need to be replaced…
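To make the "one month of rent" point concrete, here is a minimal sketch with made-up numbers (the rent, outage length, and revenue impact below are assumptions for illustration, not figures from the Dallas incident):

```python
# Toy comparison: typical SLA remedy vs. what an outage actually costs the business.
# Every number here is an illustrative assumption.
monthly_rent = 50_000        # colocation/lease payment per month
outage_hours = 8             # length of the hypothetical outage
revenue_per_hour = 75_000    # what the application earns (or protects) per hour
recovery_cost = 100_000      # overtime, emergency travel, customer credits, etc.

# Many SLAs cap the remedy at roughly one month of fees, as noted above.
sla_credit_best_case = monthly_rent
business_loss = outage_hours * revenue_per_hour + recovery_cost

print(f"Best-case SLA credit: ${sla_credit_best_case:,}")
print(f"Actual business loss: ${business_loss:,}")
print(f"Gap the customer eats: ${business_loss - sla_credit_best_case:,}")
```

Swap in your own numbers; the gap is the part no SLA credit will ever close, which is the whole point.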

 

 

 


The Math of Bitcoin – Part II

The earlier blog post had a couple of friends reaching out to ask what bitcoin has to do with the data center – it’s a hot topic, so should they (as operators) be spending time chasing opportunities in that ‘vertical’?

It’s the wild west, folks. If you like gunfights, saddle up, pack plenty of ammo, and draw faster than the other guy. But realize that while you’re shooting, you’re still getting shot at…

Bitcoin is a hot topic. Sure, there is a ton of negative press – money laundering, being able to buy drugs, bitcoin exchanges imploding – but what is the difference between bitcoin and US dollars? People and companies have laundered money for years (Scarface?). Being able to buy drugs – dealers take 5’s, 10’s, and 20’s. Exchanges imploding – does Lehman Brothers ring a bell?

The bitcoin and cryptocurrency phenomenon fixes a lot of the shenanigans in the financial system because of how it works. It is distributed, transparent, and lacks central control. It also has quite a steep learning curve attached to it, as I have found out over the past couple of months. It is not mature – it has existed for only five years. To say this bitcoin thing is doomed to fail is like saying in 1913, five years into production of the Model T, that the automobile will never take off. Prior to the Volkswagen Beetle, the Model T was the most popular production car ever made.

There are flaws (challenges?) built into how BTC was modeled and what its proposed end state looks like. This is the math of Bitcoin that people don’t talk about much. Math isn’t as sexy and nebulous as buying heroin with cryptocurrency or exchanges imploding, I guess, so I don’t see a lot of easily consumable data out there. This is my quick and dirty follow-up…

Right now the amount of BTC that will ever be produced is capped at 21,000,000. If things go as expected, the overwhelming majority of that supply will have been mined by 2033 – 19 years from now – with the remaining fractions trickling out for decades afterward.

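The original post showed the supply equation as an image here. The cap follows from the block subsidy starting at 50 BTC and halving every 210,000 blocks; a standard way to write it is:

```latex
\text{Total BTC} \;=\; \sum_{i=0}^{32} 210{,}000 \cdot \frac{\left\lfloor 50 \cdot 10^{8} / 2^{i} \right\rfloor}{10^{8}} \;\approx\; 21{,}000{,}000
```

The sum works out to just under 21 million (about 20,999,999.98 BTC), which is where the hard cap comes from.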

There are some interesting variables in the ether that may have a profound impact on the timing and ultimate value of BTC:

1. Hardware – the hardware race is an arms race. The ASIC rigs that do the computations that mine (create) BTC are constantly being developed to be faster. They can crunch more numbers and make BTC faster. The regulator on hardware speed is built into the system (see #2).

2. Complexity – as the network’s hashing power goes up, so does the mining difficulty. The blockchain is essentially a receipt for every transaction BTC has been used in, so as more BTC gets created and used, the number of transactions grows and the difficulty keeps climbing, adding more numbers that have to be crunched to mine more. This rising difficulty also makes it impractical for someone to go out, buy 1,000 rigs, control more than half of the BTC mining capability, and corner production. The net effect is that as time marches on, the difficulty goes up, the rigs become relatively less powerful, you never make more bitcoin than you do in your first month, and, at the BTC price on the day you mine it, you won’t cover the cost of the hardware and operations.

3. Costs – the mining rigs are priced right for what they do. That said, their design creates issues at scale. They don’t fit in standard racks, and they run 5-10 times hotter than a traditional rack of industry-standard servers with the corresponding power draw. So you need to add $50 for rails and quadruple your power bill – assuming you can cool the rack at all… This cost picture is why I believe the exchanges, commercial mining operations, and hosters – whom I could paint as predatory, because the math isn’t there to support a profit at any point – are doomed. The success factor in the business model is not one that can be controlled. It is a gamble.

The gamble is whether or not BTC will appreciate faster than the cost to make it. I discussed the business model in the previous post, so I won’t rehash that information, but the gist is this: if you’re mining BTC, you will spend more to make the BTC than it is worth at the moment you make it. The BTC value may or may not increase enough to cover your costs. If it does, you’re successful; if not, hopefully you needed the loss anyway…
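A minimal toy model of that gamble (the starting share of network hash rate, its growth rate, and the flat BTC price below are assumptions for illustration, not forecasts; the monthly cost is rounded from the figures in the previous post):

```python
# Toy model: a fixed mining fleet's share of the network shrinks as total
# hash rate grows, so monthly BTC output falls while costs stay flat.
fleet_share = 0.02             # assumed starting share of network hash rate
network_growth = 1.30          # assumed monthly growth in network hash rate
btc_issued_per_month = 108_000 # ~25 BTC/block * ~144 blocks/day * 30 days (2014 era)
monthly_cost = 530_000         # rent + power + staff, rounded from the previous post
btc_price = 600                # assumed flat price, $/BTC

cumulative_margin = 0.0
for month in range(1, 13):
    share = fleet_share / network_growth ** (month - 1)
    revenue = btc_issued_per_month * share * btc_price
    cumulative_margin += revenue - monthly_cost
    print(f"Month {month:2d}: revenue ${revenue:10,.0f}  "
          f"cumulative margin ${cumulative_margin:12,.0f}")
# Under these assumptions the monthly cash flow goes negative within a few
# months and the cumulative margin never recovers - before the $6M of
# hardware is even considered. A rising BTC price is the only rescue.
```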

 

Chop a line for the data center…

I was listening to Buckcherry this morning on my way to a client site, so while the title isn’t too crafty, it’s based on a damn good song from a solid band.

I was glad to see someone else in the data center world blogging about the movement downmarket by data center companies. I was smiling as I read Compass Data Centers’ blog post ‘Everybody in the Pool‘ because I know they get it, and have for a long time – a deal is a deal.

What really makes me smile is the reasons given for the big wholesalers going downmarket – a new exchange, or wanting to mop up some stranded power. That’s how it starts, and that’s why good coke dealers front a customer a little first. Chances are the user will get hooked and keep coming back for more – or in this case, mopping up stranded power becomes a full-fledged addiction to higher-margin revenue.

The issue is that the landlord is now competing for the same deals as tenants, and landlords will ALWAYS have the upper hand on price. So now what?

A deal is a deal – the reality is just settling in for those who don’t watch this stuff as closely as we do. If a deal is a deal, then a data center is a data center, and landlords, holding the upper hand on price, just kicked their tenants in the nuts.

 


The Math of Bitcoin

With the recent implosion of Mt. Gox – a bitcoin exchange in Japan – the spotlight is back on Bitcoin. The discussion ranges from who can hurl the most insults and assign the most blame, to hand-wringing about what the future holds. I will suggest another way to look at bitcoin (BTC) and focus the discussion on math-based reality.

I have looked into BTC and even modeled out a hosting offering for it because, as you might not expect, a lot of BTC miners are 20-somethings with a mining rig plugged into their apartment wall socket. I expect the latest implosion in price will wash a few dozen miners out; they will go do something else, or they’ll have an axe to grind with BTC and want to win it all back without changing the fundamentals of their efforts. I will share some of the business modeling I have done (no secret sauce here). These figures are based on the latest and greatest hardware rig you can buy – the CoinTerra TerraMiner IV:

  • A single TerraMiner has a 2 terahash-per-second hash rate and costs $6,000.
  • For $6 million, you could acquire 2 petahashes per second with one thousand of these machines.
  • $60 million doubles the entire current network hash rate of roughly 20 petahashes per second. (This will probably change as miners wash out.)
  • The monthly quantity of all bitcoins mined is roughly 108,000 BTC. Globally, that’s it, every 30 days.

So to turn up some hashing capability, it’s $6M for the rigs. That’s the easy part. Let’s look at where you put them, because these are not your average pizza boxes…

The rigs are custom built to hash numbers. They draw 2.2 kW per rig. Ten rigs fit inside a single 42U cabinet (the cabinets run about $3,000 apiece), pegging power draw at 22 kW per rack. That is a TON of heat to deal with, and few data centers can. So 1,000 of these rigs is 100 cabinets at 22 kW per cabinet, or 2.2 MW of power for the rigs. Add another 30% for cooling overhead (minimum) – another 660 kW – and the total power footprint comes to 2.66 MW, around the clock.

At 5 cents per kWh, that’s 1,941,800 kWh, or $97,090 in power bills per month. Plus $6M in hardware, plus $300,000 in cabinets, plus rent at a facility that can handle the heat at $125/kW – that’s $332,500/month, for at least a three-year term.

Let’s check the costs so far:

Hardware and cabinets (CapEx): $6.3M

Rent & power for 36 months (OpEx): $11,970,000 (rent) + $3,495,240 (power) = $15,465,240

Roughly $21.8M to get in the game

Now, I assume you’ll need people to keep things running smoothly, so I would allocate $1.2M per year for a fully burdened team. That’s roughly $25M to launch a mining company over a three-year term.
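For anyone who wants to poke at these numbers, here is a minimal sketch of the roll-up above; the inputs come straight from this post, only the rounding is mine:

```python
# Cost roll-up for the hypothetical 1,000-rig mining build described above.
rigs = 1_000
rig_cost = 6_000            # $ per TerraMiner IV
rig_kw = 2.2                # power draw per rig, kW
rigs_per_cabinet = 10
cabinet_cost = 3_000        # $ per 42U cabinet
cooling_overhead = 0.30     # minimum cooling overhead on top of rig load
power_rate = 0.05           # $ per kWh
rent_per_kw = 125           # $ per kW per month
hours_per_month = 730
term_months = 36
team_per_year = 1_200_000   # fully burdened staff

it_kw = rigs * rig_kw                                  # 2,200 kW of rig load
total_kw = it_kw * (1 + cooling_overhead)              # 2,660 kW footprint
power_month = total_kw * hours_per_month * power_rate  # ≈ $97,090
rent_month = total_kw * rent_per_kw                    # ≈ $332,500

capex = rigs * rig_cost + (rigs // rigs_per_cabinet) * cabinet_cost
opex = (power_month + rent_month) * term_months + team_per_year * term_months / 12

print(f"CapEx: ${capex:,.0f}")
print(f"OpEx over {term_months} months: ${opex:,.0f}")
print(f"All-in: ${capex + opex:,.0f}")   # roughly $25M
```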

The gotchas – the first month you mine will be the month you make the most BTC. Every month the payouts get smaller and the difficulty goes up, so you need better hardware. All. The. Time. Better hardware to chase diminishing returns.

Speaking of returns… The chart below was captured just moments before publishing. It shows network mining revenue falling 50% over a few days, from $4.5M to $2.3M. Ouch. Fortunately that tank is spread across the world, but getting cut in half is getting cut in half.

[Chart: daily network mining revenue, dropping from roughly $4.5M to $2.3M]

So what is the point of all of this?

You can make money at this, and the old adage ‘it takes money to make money’ rings true – but only if the value of that money holds up. Handing someone a dollar today that is worth 50 cents by Friday is not a good business model.

 


The Real Value (Danger?) of OpenX

I paid very close attention to the Open Compute Summit this week because a client was there (and is a member). What I have come to see is that Open Compute IS a good idea, but not for the reasons people want to believe. Let me explain…

Open Compute is good for Facebook. How do we know? They saved $1.2B implementing Open Compute hardware. That’s a great stat to be able to share. For Facebook. But what about other companies with, say, 1,000 employees, whose size or revenue is less than $1.2B – what’s in it for them? How does Open Compute solve their challenges? I for one am still waiting for an answer to THAT question.

I get standards; they have proven useful since before the metric system, the 8-inch vs. 5.25-inch floppy wars, or VHS vs. Betamax. What Open Compute is working against is an entire industry that has developed and built to standards for 30 years: power voltages, rack sizes, programming languages. Throwing the baby out with the bathwater is not going to happen quickly, if at all, because there are not enough companies at the size and scale to shave $1.2B off their costs – and willing to spend to do it.

That said, the notion within Open Compute that I do like is that it can foster similar approaches to figuring out what DOES work for your company, over whatever period of time makes sense, adopting standards that work today and will likely work in the future. In other words, you don’t HAVE to be like Facebook, but you CAN work with your vendors to figure out what does make sense, is interoperable, and will likely remain interoperable for your company. You have the standards to augment what isn’t standard, and you can still save money based on what you want to do and spend to fix it. It still may be a solution looking for a problem.

The alarm bells going off as I see where this can end up are astounding. I haven’t quite figured out if it’s a new business model, a foregone conclusion, or electronic socialism.

What appears to be happening is a rush to commoditization up and down the stack of what delivers value and makes stuff work. Data centers, servers, storage, operating systems, networking – all going ‘open’: OpenStack, open source, Open Compute. I call it OpenX. All these ‘opens’ are telling me that the race to the bottom on cost is happening in the name of innovation. Innovation itself is a cost, not revenue. Is R&D on the cost side or the revenue side of a business? If you’re innovating something that will wind up being free, or difficult to charge money for, how do you pay people to keep making free stuff? Innovation doesn’t pay the mortgage. Or employees.

When we get to the bottom, what’s left? Apps. Mobile apps, desktop apps, endpoint-consumed apps. I believe that’s why IBM bought SoftLayer, why IO got into the cloud space, and why Microsoft bought Nokia. You can argue about the financial reasons for these deals all day long, but the market-force reason is this: if you don’t have an integrated stack – data centers, servers, storage, operating systems, networking, and cloud – you miss out on being valuable to the only things that will make money: applications. And capturing, washing, and understanding data, but that’s another post.

Think about it – when we commoditize things like wheat, corn, and computer chips, the things with the value are food, fuel, and applications. The successful commodity businesses are those that operate at scale, with high risk and razor-thin margins. Look at Lenovo. They just staked their claim in the hardware business. They will make hardware, and a ton of it. At scale, low margins, high risk. They will also own enough of the market to reduce the risk of innovation that isn’t theirs.

Time will tell how it all shakes out – it always does. Keep your eyes focused on how to make money delivering value – it’s still the only yardstick that measures how your business is doing. And there is no app for that.

 

 


Zooming out

I don’t get sick a lot. I wash my hands often, carry hand sanitizer, eat well, and get 7-8 hours of sleep a night. Not the ‘I got 8 hours of uninterrupted sleep during hell week’ kind of sleep, either. Well, last week I got sick – just a cold, the first one I have had in a few years. I am a big baby and an awful patient when I get sick, and I prefer solitude to being fussed over or cared for. It is in this solitude that I get a chance to feel different and, by extension, think differently.

It hit me after the third Emergen-C cocktail before lunch, as the Advil was wearing off, that my rants and unwavering belief in modular data centers being the next major evolution in the industry have far less to do with the data center industry than I thought and have led many to believe. Feel uncomfortable = think different.

Modular data centers are about advancing the reality that computing is the next (6th?) utility, not merely about modular data centers being better versions of traditional ones. I believe the latter is also true by extension of the former, so we’ll focus on the utility of computing.

Utilities are commodities. Cable, phone, power, sewer, and water are all utilities and, by extension, commodities. They operate at scale, are consumed by millions of people worldwide where available in industrialized countries, and are delivered inexpensively because of that scale. I will focus on the US for this post because it is where I have spent the bulk of my career.

There are similarities across all of the utilities – electricity, cable, telephone, sewer, and water – and it is these similarities that have led me to the belief that computing is the next utility. In fact, it’s already here; it’s just not being sold that way.

With electricity, the power company doesn’t care if you plug in a hair dryer, a 40-watt bulb, or a weed whacker. It provides the electricity over the infrastructure required to support the endpoints and devices. Cable is the red-headed stepchild because the cable companies DO care what cable box you plug into their line, and they in turn control what you can see. The similarity is that they don’t care how many ‘i’s or ‘p’s your TV has – you can watch what you signed up for. Telephone companies don’t care what brand of phone you plug into their wires either – corded, cordless, and speakerphone are all OK. Sewer and water don’t care if it’s a toilet or a faucet connected to their lines, and the downside if you get those two confused is evident immediately, so sewer and water have built-in idiot-proofing.

Computing uses power and cable or telephone to make another commodity – data – useful. Internet service providers – whether they use cable, copper or fiber – provide networks to our businesses and homes and they don’t care if we have an iPhone, iPad, laptop, or Galaxy tablet connected. The power company delivers the electricity to power the devices used to consume the commodity – data.

Data centers are the new treatment plants, the new substations, the new central offices for this utility of data. There is a subtle shift going on as data center companies come to understand this, and understand that the business they are in is a commodity business. It wasn’t for the past 30 years – it took that long to get where we are, and the changes in the last 5 years have been more numerous than in the 25 before them. So as this rapid change happens around data centers, change is also happening within the data center industry as the realization settles in that data centers are a commodity business. Real estate isn’t a commodity, but data centers are. Paying more for a commodity isn’t a successful business model. Supply and demand drive pricing, and those are the controls in commodity businesses. A freeze in Florida means orange juice prices skyrocket because there are fewer oranges.

The point?

The point is that over the next two to three years data center companies will be retooling to cut costs – and the number one cost, by the way, is electricity. To cut electricity costs you look at the efficiency of the facilities you have and get them dialed in. Expansions happen where there is cheap power, and to get really cheap power you dial in workload to stabilize the electrical load. Tenants will look at restructuring leases because they will be more efficient – or at least they should, otherwise they are paying for their own inefficiencies while the landlord improves efficiency and laughs all the way to the bank.

One thing is certain: the company that understands AND acts on the fact that it is in the utility business will win the commodity game. The winners will shave costs, focus on sameness of product offering with small nuances of differentiation, and deliver ubiquitous, reliable service in a lot of locations.

Time to get a glass of orange juice and watch what happens…

mark at blunthammer.com

Modular in 2014 – A Vision for You

This is additional coverage of a piece over at Data Center Knowledge about adoption of, and competition among, modular data centers heating up. Over the past 6-7 years – as long as we have been in the modular space – we have identified many issues with the options, and the biggest issues had little to do with the technology.

While the M&A activity with Schneider/AST and a pending IPO from IO bode well for putting modular options in the public arena, little has been done to address the (addressable) challenges that still exist with ANY of the vendors:

1. Focus on product vs. solution. If you talk to the vendors, as we have done for 7 years, the positioning is that these are ‘solutions.’ They are not; they are products. A solution is a sign-and-drive experience, not a buy-it-and-then-figure-out-where-to-put-it experience. IO arguably has the most seamless solution out there with its IO.Anywhere product, assuming you like Singapore, NJ, or AZ as data center locations.

2. No place to put them. When modular products were introduced, a lot of attention was given to the ruggedness of the containers. They could sit outside in all sorts of climates. That’s great, only people are the ones who make these things work, so while the containers were rugged, the technicians didn’t like 115-degree heat in direct sunlight, or a nor’easter blowing rain sideways while they stood in a big box with a lot of electricity coming into it. Even NextFort (we would consider them a hybrid design) enclosed their offering, which is largely built with the same materials as a Home Depot. They too realized that people are not as rugged as the product, and enclosed their data center. There still is not a modular-centric facility out there to support a solution. There are ‘architectures’ and ‘approaches,’ but show me a building where we can roll in a modular data center that we buy, plug it in, commission it, and move kit in. We want to see one that’s not on paper – we have those already.

3. Marketed as a specific use case vs. a platform. We don’t know how many data centers you have been in, but we have been in over 100, and for the most part every single raised-floor data center looks the same. If pushed for a number, we would say 95% are like that. The biggest difference within a facility is the size and shape of the rooms: white walls, (usually) white or off-white 24″ x 24″ floor tiles, humming and/or blowing sounds from the air conditioners and power distribution equipment, and computer racks full of servers with fans in them. A container is simply another form factor into which to put racks full of servers with fans in them. The biggest difference between data centers is the stretch between the airport and the front door, and that stuff is far more important than look and feel.

4. Lack of (referenceable) information. It’s a royal pain in the ass to find any data to help drive a decision on modular. For a few years no one wanted to say how many units they had deployed, because the answer was none, one, or maybe two at best. The customers didn’t want the notoriety either, because they might be called ‘out of their minds’ for using technology that was not widely deployed and was seen as risky. That stink has still not washed off completely.

So here is why we believe (and have believed for 7 years) modular makes sense:

1. They are products, not solutions. You are not locked into a particular vendor’s building, design, density, (poor) efficiency, or lack of support. You want a private cloud? Buy a container: it gives you 20-30 racks that you can run 750 kW to, and you can lock the door of the unit and the door of the container. Your infrastructure, your container, your control. Blunt Hammer even has a design and a financial model done.

2. You can put them anywhere. Here is where the data center industry can really get interesting… I worked on a business plan for most of 2013 for investors who wanted to see, in detail, what the opportunity in modular was. The findings blew many of our own notions out of the water. One of the biggest was cost. We found we could deploy a data center facility for roughly $3M per megawatt. We found we could deliver 20 MW in 40,000 square feet, along with electrical distribution, DR bunk space, and a nice office or two – $60M for 20 MW at a site PUE of 1.3 (see the back-of-the-envelope comparison after this list). I remember when $11M per MW for a used facility with a 2.8 PUE was a good deal. If I am a traditional facility, I am hoping no one implements this business model, because no matter how low I drop my rent, my real estate and infrastructure costs can’t touch this (cue MC Hammer)… The other thing we knew but hadn’t proved was that for all of the retrofits being shopped today (we have looked at breweries, former malls, old Kmarts, and office buildings), using a container was faster and 60% cheaper. I would rather have 10 new sites around the US, all geographically diverse and synchronized over my own network, that cost me less than 3 facilities with a single vendor in a 100-year-old building with a janitorial staff that can hit an EPO button while dusting and shut me down (this just happened a few weeks ago at a facility).

3. They make an awesome, stable, identical platform. Think about this – I can have a handful of sites around the US with MY STUFF (private cloud, streaming service, gaming apps, healthcare data) running and stored. Synchronized, with disaster mitigation built into geographic failover, attached to major peering points, inexpensive to operate, and requiring the same skill sets as a data center to run and support. And guess what – if I don’t love them, I recycle them at the next tech refresh or change to a vendor with a better mousetrap, and I still cut my computing costs in half on better, faster equipment for that period of time. No vendor lock-in past the end of the hardware lease, and I have a true common platform and piece of equipment for my operations staff to know. It’s the Southwest operations model for data centers, only instead of Boeing 737s it’s a container.
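As promised in point 2, a minimal back-of-the-envelope comparison of the two cost profiles mentioned there. The $/MW and PUE figures come from this post; the electricity price and the assumption that the 20 MW of IT load runs fully utilized are mine, for illustration only:

```python
# Compare the modular build described above with a traditional/used facility,
# on capex and annual electricity. Power price and utilization are assumptions.
it_load_mw = 20                 # critical IT load from the post
power_price = 0.05              # assumed $ per kWh
hours_per_year = 8_760

scenarios = {
    "modular build":       {"capex_per_mw": 3_000_000,  "pue": 1.3},
    "used/retrofit build": {"capex_per_mw": 11_000_000, "pue": 2.8},
}

for name, s in scenarios.items():
    capex = it_load_mw * s["capex_per_mw"]
    annual_kwh = it_load_mw * 1_000 * s["pue"] * hours_per_year
    annual_power_cost = annual_kwh * power_price
    print(f"{name:22s} capex ${capex/1e6:5.0f}M   "
          f"electricity ${annual_power_cost/1e6:5.1f}M/yr at PUE {s['pue']}")
```

Even before rent and staffing, the capex gap is more than 3x and the electricity bill roughly doubles at the higher PUE, which is the point about traditional facilities not being able to touch this.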

So are we glad to see more attention in the space? Yes. What we really want is to deploy the next data center platform – one that is already designed, already modeled, and needs the right investors who understand utility scale.

Blunt Hammer’s vision is to help deliver the next data center platform in more places around the world and to operate the utility for computing. The mission is to give the world a data center platform that fixes the inefficiency, cost, and one-off uniqueness that are holding back the ability to compute better.

Please contact us if you want to learn more about what has been done. We have the team, the designs, the locations, and interested customers in hand.

mark AT blunthammer.com

Cloud Velocity – put them on your radar

I am not a paid arm-waver for any company, including the one I want to get on your radar ASAP – http://www.cloudvelocity.com

Here is why I believe they are important – they orchestrate moving to the cloud, and help keep it orchestrated correctly after you’re there. One HUGE caveat (today) is that they are tied to AWS. If I were someone at Azure, Ajubeo, IBM, or any other cloud company I would be getting into their pilot program this week.

Blunt Hammer would be happy to discuss the practical application of this technology for your company. It’s a game changer.

mark at blunthammer.com

Knowledge or Wisdom?

2013 was pretty good for us here at Blunt Hammer. Toward the end of the year, we went out to validate assessment data we had been gathering remotely for a month and a half, to ensure that things were as advertised. They were.

Some seemingly minor things happened on our trip that point to larger issues within the data center business, and they could be a reason behind the uptick in our business last year. They deal with credibility.

One of the meetings was with some big wheels at a client of ours. Blunt Hammer was there to validate infrastructure, data, and positioning relative to what was already in place. Too often, what is said or presented comes across as Gospel truth when in fact the infrastructure isn’t there and is merely a plan (a/k/a dream) to put something in place once a developer has enough information about what they should do. In this case, our client did in fact have the power, the fiber, and the water infrastructure IN PLACE, and has two existing data centers from respectable financial services firms in their zip code as well. Clearly not a dream, clearly not something in the planning stages – this client had delivered and had two tenants there already.

You can imagine my surprise when we were discussing what mentions they may have received as a legitimate data center site and they told me they aren’t on anyone’s radar. I was stunned. I asked whether anyone in the data center brokerage community ‘had reached out or even said anything at all,’ and I was told that the city had received a negative mention on a trade show panel over the summer from someone in the data center space. My follow-up question was, ‘When were they here to meet with you and check things out?’ The response: ‘Never.’

Imagine that! Someone saying something untrue without investigating the facts? No, they weren’t from a network news outlet either – this isn’t broadcast media, folks. This is the data center industry, where people die and millions are lost if we don’t do our jobs well every single day. What is more upsetting is that someone would be willing to commit credibility suicide by saying something (positive or negative) without doing their homework – especially when you can get in your car and drive to check it out for yourself, or in my case fly 1,800 miles to make sure that before I say it’s a legitimate site, it is a legitimate site. Makes me wonder what else this ‘data center professional’ is telling their clients…

The second ‘minor’ thing was someone passing me a Wired article titled ‘Five Reasons Today’s Data Centers Are Broken.’ The author is Patrick Flynn, who works for IO Data Centers – a company we are quite familiar with, and we genuinely like what they’re doing to evolve the data center. At the end of the day, the article should have been titled ‘Five Reasons the Data Center Business Is Broken’ and taken that approach. Saying that data centers are broken – which the article goes on to define – is like saying cars are broken because there was leaded gasoline in the 1960s or because whitewalls look stupid. I understand that IO makes data centers, so from their perspective I get saying data centers are broken, but Wired not taking an industry-wide perspective on how modular fixes things makes this, in our opinion, a paid marketing placement rather than journalism. Maybe it was, in which case their credibility as a tech news outlet can now be called into question.

The reasons cited for data centers being broken are:

1. Fear of the unknown

2. Lack of data

3. Complacency

4. Value engineering

5. Snowflake design

Our position is that it is the business that is broken, not the facilities. We could take the easy way out and be politically correct, or we can call a spade a spade and say it’s the business that is broken. The way data centers are financed is flawed. The way they are sold and leased is confusing (we did a study with a third party to confirm it), and deals are weighted toward satisfying financial projections pulled together by people whose computer of choice is still an HP. An HP 12C calculator. Ask ANYONE in finance how they actually value a data center deal, and you know what the answer is? ‘It depends.’ Depends are adult diapers, not decision criteria.

I have spoken to hundreds of finance people over the past 5 years while starting various businesses, so I can make this statement: I have been through data center modeling many times, and the result was the same – money would rather continue to believe its own stories than do its homework and make decisions based on unchanging criteria set at the beginning of due diligence. If the criteria change, then either they weren’t important or the objective isn’t clear. Money also trusts people who haven’t walked the talk. My new favorite questions are ‘So who have you talked to there?’ and ‘When did you validate what’s in the listing?’ Try asking them yourself.

If you have seen the movie Good Will Hunting, you might remember the scene when Robin Williams’s character (Sean Maguire) is meeting with Matt Damon’s character (Will Hunting) on a bench in the Boston Public Garden and calls him out for being ‘just a kid. You don’t have the slightest idea about life. It’s OK, you’ve never been outta Bahstin (Boston)… If I asked you about art, you could give me the skinny on every artist; Michelangelo, you could tell me all about him… Life’s work, political aspirations, him and the Pope, sexual orientation, the whole works, right? I’ll bet you can’t tell me what it smells like in the Sistine Chapel.’ BAM! Knowledge vs. wisdom.

There it is – you can study this industry until you’re blue in the face, read about what everybody else is doing, read 10-Ks every quarter, talk to analysts weekly. But can you explain to investors what happens at One Wilshire when a submarine cable is cut in the Red Sea, and what to do in a global network event? Can you explain why two different design approaches can be $26 million apart in price? Can you tell a client why a data center someplace that isn’t on their list makes sense, and then cost-justify it?

We can, because we have. We’ll tell you what it smells like. Just because it’s a Ferrari doesn’t mean the passenger didn’t throw up in it last night. That said, it’s still a Ferrari, and we know how to get the vodka/Red Bull/burrito smell out.

mark at blunthammer.com