Rob Hirschfeld
Cloud Culture: Level up – You win the game by failing successfully [Collaborative Series 6/8]

Translation: Learn by playing, fail fast, and embrace risk.

This post is #6 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

It's good to fail

Digital Natives have been trained to learn the rules of the game by just leaping in and trying. They seek out mentors and learn the politics at each level.

Early failure is the expected process for mastery.

You don’t believe that games lead to better decision making in real life? A January 2010 magazine article reported that observations of the new generation of football players showed they had adapted tactics learned in Madden NFL to the field. It is not just the number of virtual downs played; these players have gained a strategic, field-level perspective on the game that was previously limited to coaches. Their experience playing video games has shattered the on-field hierarchy.

For your amusement, here is a video about L33T versus N00B culture from College Humor: “L33Ts don’t date N00Bs.”

Digital Natives embrace iterations and risk as a normal part of life.

Risk is also a trait we see in entrepreneurial startups. Changing the way things were done before requires you to push the boundaries, try something new, and consistently discard what doesn’t work. In Lean Startup Lessons Learned, Eric Ries built his entire business model around the try-learn-adjust process. He’s shown that iterations don’t just work; they consistently out-innovate the competition.

The entire reason Dell grew from a dorm room to a multinational company is this type of fast-paced, customer-driven interactive learning. You are either creating something revolutionary or you will be quickly phased out of the Information Age. No one stays at the top just because he or she is cash rich anymore. Today’s Information Age company needs to be willing to reinvent itself consistently … and systematically.

Why do you think larger corporations that embrace entrepreneurship within their walls seem to survive through the worst of times and prosper like crazy during the good times?

Gamers have learned that risk with a purpose will earn you rewards.

Rob Hirschfeld
Cloud Culture: Online Games, the real job training for Digital Natives [Collaborative Series 5/8]

Translation: Why do Digital Natives value collaboration over authority?

Kids Today

This post is #5 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Before we start, we already know that some of you are cynical about what we are suggesting—Video games? Are you serious? But we’re not talking about Ms. Pac-Man. We are talking about deeply complex, task-driven games with rich storytelling that rely on multiple missions and worldwide player communities working together on a singular mission.

Leaders in the Cloud Generation don’t just know this environment; they excel in it.

The next generation of technology decision makers is made up of self-selected masters of the games. They enjoy the flow of learning and solving problems; however, they don’t expect to solve them alone or in a single way. Today’s games are not about getting blocks to fall into lines; they are complex and nuanced. Winning is not about reflexes and reaction times; winning is about being adaptive and resourceful.

In these environments, work can look like chaos. Digital workspaces and processes are not random; they leverage new-generation skills. In the book Different, Youngme Moon explains how innovations look crazy when they are first revealed. How is the work getting done? What is the goal here? These are called “results-only work environments,” and studies have shown they increase productivity significantly.

Digital Natives reject top-down hierarchy.

These college-educated self-starters are not rebels; they just understand that success is about process and dealing with complexity. They don’t need someone to spoon-feed them instructions.

Studies at MIT and The London School of Economics have revealed that when high-end results are needed, giving people self-direction, the ability to master complex tasks, and the ability to serve a larger mission outside of themselves will garner groundbreaking results.

Gaming does not create mind-addled, Mountain Dew-addicted, unhygienic drone workers. Digital Natives raised on video games are smart, computer-savvy, educated, and, believe it or not, resourceful independent thinkers.

Thomas Edison said:

“I didn’t fail 3,000 times. I found 3,000 ways how not to create a light bulb.”

Being comfortable with making mistakes thousands of times until mastery sounds counter-intuitive until you realize that is how some of the greatest breakthroughs in science and physics were discovered. Thomas Edison went through 3,000 failed iterations in creating the light bulb.

Level up: You win the game by failing successfully.

Translation: Learn by playing, fail fast, and embrace risk.

Digital Natives have been trained to learn the rules of the game by just leaping in and trying. They seek out mentors, learn the politics at each level, and fail as many times as possible in order to learn how NOT to do something. Think about it this way: you gain more experience when you try and fail quickly than when you carefully plan every step of your journey. As long as you are willing to make adjustments to your plans, experience always trumps prediction. Just like in life and business, games no longer come with an instruction manual.

In Wii Sports, users learn the basics in-game and figure out the subtleties of the game as they level up. Tom Bissell, in Extra Lives: Why Video Games Matter, explains that the in-game learning model is core to the evolution of video games. Game design involves interactive learning through the game experience; consequently, we’ve trained Digital Natives that success comes from overcoming failure.

Rob Hirschfeld
To improve flow, we must view the OpenStack community as a Software Factory

This post was sparked by a conversation at OpenStack Atlanta between OpenStack Foundation board members Todd Moore (IBM) and Rob Hirschfeld (Dell/Community). We share a background in industrial and software process and felt that lean manufacturing thinking translates directly to OpenStack’s challenges.

While OpenStack has done an amazing job of growing contributors, scale has caused our code flow processes to be bottlenecked at the review stage.  This blocks flow throughout the entire system and presents a significant risk to both stability and feature addition.  Flow failures can ultimately lead to vendor forking.

Fundamentally, Todd and I felt that OpenStack needs to address system flows to build an integrated product. This post expands on the “hidden influencers” issue and adds an additional challenge: improving flow requires that the community’s influencers better understand the need to optimize work across projects in a more systematic way.

Let’s start by visualizing the “OpenStack Factory”


Factory Floor from Alpha Industries Wikipedia page

Imagine all of OpenStack’s thousands of developers working together in a single giant start-up warehouse, each project in its own floor area with appropriate foosball tables, break areas and coffee bars. It’s easy to visualize clusters of developers talking intently around tables or coding in dark corners while PTLs and TC members dash between groups coordinating work.

Expand the visualization so that we can actually see the code flowing between teams as little colored boxes. Giving each project a unique color allows us to quickly see dependencies between teams. Some features are piled up waiting for review inside teams, while others sit on pallets between projects, waiting on needed cross-project features that have not completed. At release time, we’d be able to see PTLs sorting through stacks of completed boxes to pick which ones were ready to ship.

Watching a factory floor from above is a humbling experience and a key feature of systems thinking enlightenment in both The Phoenix Project and The Goal.  It’s very easy to be caught up in a single project (local optimization) and miss the broader system implications of local choices.

There is a large body of work about Lean Process for Manufacturing

You’ve already visualized OpenStack code creation as a manufacturing floor; it’s a small step to accept that we can use the same proven processes for software as for physical manufacturing.

As features move between teams (work centers), it becomes obvious that we’ve created a very highly interlocked sequence of component steps needed to deliver the product; unfortunately, we have minimal coordination between the owners of the work centers. If a feature needs a critical resource (think: a specific programmer) to progress, then we rely on that person to allocate time to the work. Since that person’s manager may not agree with the priority, we have a conflict between system flow and individual optimization.

That conflict destroys flow in the system.

The #1 lesson from lean manufacturing is that putting individual optimization over system optimization reduces throughput. Since our product and people managers are often competitors, we need to work doubly hard to address system concerns. Worse yet, our inventory of work in process and the interdependencies between projects are harder to discern. Unlike the manufacturing floor, our developers and project leads cannot look down upon it and see the physical work as it progresses from station to station in one holistic view. The bottlenecks that throttle the OpenStack workflow are harder to see, but we can find them, as demonstrated later in this post.

Until we can engage the resource owners in balancing system flow, OpenStack’s throughput will decline as we add resources.  This same principle is at play in the famous aphorism: adding developers makes a late project later.

Is there a solution?

There are lessons from Lean Manufacturing that can be applied:

  1. Make quality a priority (expand tests from function to integration)
  2. Ensure integration from station to station (prioritize working together over features)
  3. Make sure that owners of work are coordinating (expose hidden influencers)
  4. Find and manage from the bottleneck (classic Lean says find the bottleneck and improve that)
  5. Create and monitor a system view
  6. Have everyone value finished product, not workstation output
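Lesson #4 above is easy to demonstrate: in a serial pipeline, end-to-end throughput is capped by the slowest station, so effort spent anywhere except the bottleneck changes nothing. A minimal sketch (the stage names and rates below are invented for illustration, not real OpenStack numbers):

```python
# Throughput of a serial pipeline is capped by its slowest stage.
# Stage rates (changes/day) are hypothetical, for illustration only.
stages = {
    "write": 40,
    "review": 8,      # the bottleneck (cf. OpenStack's review stage)
    "gate/CI": 25,
    "merge": 30,
}

throughput = min(stages.values())
bottleneck = min(stages, key=stages.get)
print(f"System throughput: {throughput}/day, limited by '{bottleneck}'")

# Improving a non-bottleneck stage changes nothing:
stages["write"] = 80
print(min(stages.values()))   # still 8

# Improving the bottleneck lifts the whole system:
stages["review"] = 16
print(min(stages.values()))   # now 16
```

This is the same local-versus-system optimization point made above: doubling the "write" rate looks like progress inside one work center, yet system throughput only moves when the review stage does.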

Added postscript: I highly recommend reading Daniel Berrange’s email about this.

Jason Boche
BenQ W1070 and the Universal Ceiling Mount

Over the weekend I hung my first theater projector, the BenQ W1070 1080P 3D Home Theater Projector (White), using the BenQ 5J.J4N10.001 Universal Ceiling Mount, both available of course through Amazon. While I didn’t expect the installation to be overly complex, I did employ a slow and methodical planning approach before drilling large holes into the new knockdown theater ceiling.

After unboxing the projector and the universal ceiling mount kit, I looked at the instructions, the parts, and the underside of the projector. If you’re reading this, it’s probably because you’re in the same boat I was in – the diagrams don’t closely resemble the configuration of what you’ve got with the W1070. Furthermore, some of the reviews on Amazon seem to suggest this universal ceiling mount kit doesn’t work with the W1070 without some modifications to the mounting hardware. I read tales of cutting and filing as well as adding longer bolts, tubing, and washers to compensate for the placement of the mounting holes on the W1070. Not to worry, none of that excess is needed. If you concentrate on the written instructions rather than the diagrams for mounting the hardware to the projector, it all actually works and fits together as designed with no modifications necessary. The one exception is that not all of the parts provided in the kit are used. This is perhaps what leads to the initial confusion. The diagrams suggest a uniform placement of four (4) mounting brackets on the underside of the projector in a ‘cross’ pattern. While this may be the case for some projectors, it’s not at all a representation of the W1070 integration.

For openers, the BenQ W1070 has only three (3) mounting holes, meaning only three (3) mounting brackets will be used and not all four (4). Furthermore, the mounting holes are not placed uniformly around the perimeter of the projector. That, combined with the uneven surface of the projector, can lead to uncertainty that these products were meant for each other and, if so, then how. Simply follow the directions and screw the three brackets into place while allowing a little give so that you can swing the brackets into a correct position. I say _A_ correct position because there are nearly countless positions in which you can configure them, and it will still work correctly, resulting in a firm mount to the ceiling.

The image below shows an example of how I configured mine:

Next, place the mounting plate on top of the mounting brackets. Slide the mounting screws in the brackets, and gently swing the brackets themselves, so that the screws can extend through one of the channels in the mounting plate. Gently remove the mounting plate and torque the screws attaching the bracket to the projector.

I took some additional steps which may not have been necessary with modern projector technology but nonetheless the methodical approach helps me sleep better at night and reassures me I’m not destroying my ceiling in the wrong spot. I used a felt tip marker to mark a center point on the projector relative to the telescoping pole that will mount to the plate.

I then temporarily removed the mounting plate to measure the telescoping ceiling mount offset relative to the front and center of the projector lens. This measurement translates into the offset for the ceiling mount relative to the center of the room and the distance to the projection wall. Performed correctly, it allowed me to mount the front of the lens 10’10″ from the projection wall (the sweet spot for my calculated screen size, seating, zoom, etc.) as well as mount the lens exactly in the middle of the room from a side-to-side lateral perspective.
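For anyone repeating this measurement, the underlying arithmetic is simple: throw ratio is the lens-to-screen distance divided by the image width, and a planned mounting distance must land inside the projector's zoom range before you drill. A quick sketch; the W1070 throw range and the 100-inch screen here are assumptions for illustration, so verify against your own manual:

```python
# Check a planned projector mounting distance against a throw-ratio range.
# Throw ratio = lens-to-screen distance / image width.
# The W1070 range below is an assumption; verify it in your manual.
THROW_MIN, THROW_MAX = 1.15, 1.50

def screen_width(diagonal_in, aspect=(16, 9)):
    """Width of a screen computed from its diagonal and aspect ratio."""
    w, h = aspect
    return diagonal_in * w / (w**2 + h**2) ** 0.5

def distance_ok(distance_in, diagonal_in):
    """Return (within_range, throw_ratio) for a planned distance."""
    ratio = distance_in / screen_width(diagonal_in)
    return THROW_MIN <= ratio <= THROW_MAX, ratio

# 10'10" = 130 inches, with a hypothetical 100" diagonal 16:9 screen:
ok, ratio = distance_ok(130, 100)
print(f"throw ratio {ratio:.2f}, within zoom range: {ok}")
```

Running the check before marking the ceiling is exactly the "slow and methodical" step above, just in numbers instead of felt-tip marks.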

In closing, the only other thing I’d add here is that if your lag bolts are not hitting studs in the ceiling, don’t bother with the plastic sheetrock inserts. While they may work, I don’t trust them for the amount of money I spent on the projector, and I certainly don’t want the projected image wiggling because the projector isn’t firmly mounted to the ceiling. Only one of my lag bolts hit a stud. For the remaining bolts, I went to Home Depot and purchased some low-cost anchor bolts (these are the ones I used, along with a fender washer) rated for 100 lbs. each. Suffice it to say, the projector is now firmly hung from the ceiling.

Post from: VMware Virtualization Evangelist

Copyright (c) 2010 Jason Boche. The contents of this post may not be reproduced or republished on another web page or web site without prior written permission.


Mark Cathcart
Decaying Texas

It’s been an interesting month. I live in Austin, Texas, boomtown USA. Everything is happening in construction, although not much in transport. In many ways Austin reminds me of rapidly developing cities in China, India and other developing countries. I’ve travelled some inside Texas, but mostly on I-10 and out East. I’ve tended to dismiss what I’ve seen in small towns, mostly because I figured they were unrepresentative.

Earlier this month I did my first real US road trip. I had my Mum with me for a month and figured a week or so out of the heat of Texas would be a good thing. We covered 2,500 miles, up through Northwest Texas, New Mexico, and Colorado. On the way back we went via Taos, Santa Fe, and Roswell, and then back through West Texas.

There it was, small town after small town, decaying. Every now and again you’d drive through a bigger town that wasn’t as bad, but overall massive decay, mostly in the commercial space. Companies had given up, gone bust, or been run out of town by a Walmart 30-50 miles away. Even in the bigger towns there was really no choice: there were Dollar Stores, Pizza Hut, McDonalds or Burger King, Sonic or Dairy Queen, and gas stations. Really not much else, except maybe a Mexican food stop.

It was only just before sunset on the drive back through West Texas, with my Mum asleep in the backseat, that I worked out my camera and telephoto lens rested perfectly between the steering wheel and the dashboard, and I started taking pictures. These are totally representative of what I’ve seen all over Texas. Just like the small towns out near Crockett and Lufkin in East Texas; pretty similar to anything over near Midland; outside El Paso; down south towards Galveston. Decaying Texas.

Click to view slideshow.

What there was plenty of, in the miles and miles of flat straight roads, was oil derricks and tankers, hundreds upon hundreds of them. It’s not clear to me what Governor Perry means when he talks about the Texas Miracle, but these small towns, and to some degree smaller cities, have more in common with the towns and cities in China and India, slowly being deserted and run down in the rush to the big cities.

Click to view slideshow.

Interestingly, while I was writing and previewing this entry on WordPress, it suggested the mybigfatwesttexastrip post, which ends with the following:

The pictures above tell the story of a dying West Texas town and the changing landscape of population movement away from the agrarian society to the city.

William Leara
PCI-SIG Compliance Workshop, Taipei, Taiwan

Original announcement:

Dear PCI Developer,

Registration is now open for the PCI-SIG(R) Compliance Workshop #91, which will be held October 28-31, 2014 in Taipei, Taiwan!


The PCI-SIG Compliance Workshop #91 is held to promote PCI Express(R) specification compliance in the industry with the goals of eliminating interoperability issues and ensuring proper implementation of PCI specifications. Participation provides an opportunity to find and fix problems before release. This saves your company time and resources while offering valuable networking and training opportunities with your fellow engineers. Official testing capabilities for Workshop #91 include PCI Express 3.0 and PCI Express 2.0.


Attendance at this members-only event is free. Please note that your credit card information will be collected for product registration(s); however, you will not be charged unless you do not bring your product to the event or your product registration is not cancelled by 12noon Friday, September 26, 2014.

Registration Information and Deadlines

Onsite registration is not available. We do not accept onsite product registrations, so you MUST register your product prior to the registration cut-off date of 12noon PT on Friday, September 26, 2014. Your testing schedule will be created based off of the information you provide for your registered product, so please be sure that any changes to your product’s information are completed prior to 12noon PT on Friday, September 26, 2014. No product detail changes may be made after registration has closed as we will be distributing anonymized testing schedules in advance of the event. Name badges and non-anonymized test schedules will be distributed on Tuesday morning from 8:00-8:45am outside the PCI-SIG Hospitality Suite (Room 401).

In case registration exceeds our testing capacities, we have established a reasonable cap for each product type and revision. If these caps are reached during online registration, we will put any additional products on a waiting list and notify the product registrants. Products will be moved from the waiting list to full registration if possible, based on the order of their attempted pre-registration and will be notified the week of October 6.

System Vendors: System vendors are required to bring a laptop to the workshops for use in their Interoperability test suites with a compatible browser (Chrome or FireFox) for wirelessly submitting Interoperability test results electronically to a Hospitality Suite Server. The wireless application will provide a means for saving the test results to a soft copy PDF file in the gold suites and interoperability test suites. Additionally, a URL will be provided along with login information where testers may view their test results and download a soft copy PDF after the workshop. These results are only available until the next scheduled workshop.

You must register your products and reserve your hotel room before the cut-off dates to confirm your space at the event. Hotel reservations will not be accepted after Monday, October 13, 2014 and registration will close 12noon on Friday, September 26, 2014. All members can register and find additional information online at  

Best Regards,

PCI-SIG Administration
3855 SW 153rd Drive
Beaverton, OR 97003
Phone: (503) 619-0569

Hollis Tibbetts (Ulitzer)
To Heck with 'Big Data,' 'Little Data' Is the Problem Most Face

"Big data" gets all the press - but for the vast majority of people who work with data, it's the proliferation of "little data" that impacts us the most. What do I mean by little data? I'm referring to the proliferation of various SaaS and Cloud-based applications, on-premises applications, databases, spreadsheets, log files, data files and so forth. Many organizations are plagued with multiple instances of the same applications or multiple applications from different vendors that do essentially the same thing. These are the applications and data that run today's enterprise - and they're a mess.


William Leara
Could This Be The Wrongest Prediction Of All Time?

In yet another fantastic Computer Chronicles episode, Stewart and Gary are this time talking to computer entrepreneurs. The year is 1984. Among the guests are Gene Amdahl, Adam Osborne, and the co-founder and CEO of Vector Graphic Inc., Lore Harp.

The context is a general discussion about the PC industry, asking where can entrepreneurs successfully innovate, and how is it possible for start-ups to compete with IBM.

Gary’s question to Lore:
I know that you’ve been involved very closely with the whole industry as it’s switched toward IBM hardware; what are your feelings about the PC clones?
…and Lore’s response:
In my opinion, they are not going to have a future …
I don’t think they are going to be a long term solution.
The Computer Chronicles, 1984
Little did she know that IBM would stop being a serious PC competitor within ten years, and would stop selling PCs altogether in twenty.

What fascinates me about this crazy-bad prediction is that she brings up some interesting points, but then manages to come away with the exact wrong conclusion.  Listing her remarks one by one:

1. Clones are not creating any value—putting hardware together and buying software that are available to anyone

That the clone makers were putting together off-the-shelf hardware and software is incontrovertible.  However, the question she should have asked is “why would anyone pay a premium for the same batch of off-the-shelf hardware and software just because it says ‘IBM’ on the front?”  In other words, the off-the-shelfness (I made that word up) of the PC industry was a threat to IBM, not to the clone makers.

2. Clones are not creating anything that makes them proprietary

I guess that was the prevailing business wisdom at the time—you create value by creating something proprietary and lock-in customers to your solution.  What would she think of today’s industry around open source software?

Of course IBM ended up following exactly this strategy themselves—creating a proprietary system: the PS/2 running OS/2. The market refused to accept it and to become beholden to one vendor. In the end, it was actually the PC clone makers’ lack of proprietary technology that ensured their eventual triumph over IBM.

3. If IBM takes a different turn, software vendors will follow suit, leaving out clone makers

As with her other remarks, this one also turned out to be quite prescient—IBM did indeed take a different turn and created the PS/2 with Micro Channel running OS/2.  But rather than the software vendors following IBM, they abandoned IBM.  Microsoft quit development of OS/2 and bet the company on Windows and Windows NT.  The software industry followed the clone makers, not IBM.

4. Clone makers cannot move as quickly as IBM (?!?!?!) because IBM will have planned their move in advance

What is hilarious about this statement is that of all the myriad things one could say about Big Blue, “moving quickly” is not one of them.  Anyway, as already mentioned, IBM planned their move years in advance and introduced their own proprietary hardware and software system.  The clones moved even quicker and standardized on ISA/EISA and Windows.  The rest is history!

Full episode:

Whatever happened to Lore Harp and Vector Graphic?

William Leara
As the Apple ][ Goes, So Goes the iPhone

With the great success of the iPhone comes many illegal knock-off manufacturers.  Sound familiar?  It should—Stewart Cheifet reported the same thing happening to a previous Apple product, the Apple ][ … in 1983!

Checkout the video clip from a 1983 edition of The Computer Chronicles:

William Leara
Apple iWatch Revealed! (in 1985)

In another great episode of the Computer Chronicles, Stewart and Gary demonstrate a watch-based computer.  In yet another example of “the more things change, the more they stay the same”, Stewart makes the remark:

Is this another example of technology in search of a purpose?

That is the topic still being debated today, thirty years later:  will the Samsung Galaxy Gear, Pebble watch, or the iWatch have real value, or is it just technology for technology’s sake?  Are people willing to carry 1) a smart phone, 2) a computer or tablet, and 3) wear a watch?  It’s great to see how the “next big thing” today is really just another attempt at what was tried thirty years ago.

Is a wrist-computer worthwhile?  Leave a comment with your thoughts!

Full episode:


Hollis Tibbetts (Integration)
Application Proliferation Accelerates - CIOs Unaware of Impending Integration Headaches

The advancement of technology has led to widespread Cloud application usage throughout businesses and corporations. So widespread that IT is largely caught unaware of the impending Integration (not to mention security, backup/recovery, compliance and governance) headaches that result from such rapid proliferation.

Even without this SaaS and Cloud "explosion", organizations already faced a huge challenge integrating all their legacy and on-premises applications and data sources in order to more optimally run, manage and make critical decisions about the business. Over the past decades, enterprises purchased a large number of on-premises software packages to improve both the efficiency and effectiveness of their operations - and in most cases created an un-integrated hairball of information and process architecture.

Despite the evolution of various application and software platforms, integration architectures and so forth, enterprises still find themselves unable to "catch up" with the rapid growth in applications and data sources - and are therefore unable to take full advantage of all their data.

Business Intelligence expert Gaute Solaas, CEO of software vendor iQumulus comments, "The typical enterprise has thousands of data sources and applications, and there is an increasing number of data-producing devices and entities on the horizon. IT isn't prepared to deal with that - businesses need tools to easily and cost-effectively harness this ever-increasing number of disparate data sets - and enable the productive and meaningful presentation of the resultant information to individuals across the organization."

SaaS and Cloud technologies bring tremendous benefits to the organization; however, everything has a downside - these days, anyone with a credit card and $25 to spend can create a new application and data island. No longer does IT need to be involved - or even aware of its creation. And increasingly IT isn't aware - and that's troubling.

In an era where the concept of "instant gratification" is increasingly being applied to applications and data storage (thanks to SaaS and Cloud), increasingly individuals, small groups, departments and line of business owners are swiping their credit cards and getting "instant" business applications - without regard for the downstream consequences - such as Integration, Business Intelligence, security, compliance and backup/recovery (just because someone else hosts your data doesn't mean it's necessarily safe, secure or even backed up. Many organizations face a major financial risk with SaaS and Cloud applications).

In the rush to take advantage of these easy to procure and deploy application, storage and computing solutions, there is a real consequence - the unknown proliferation of cloud silos across the enterprise.
Unfortunately, SaaS and Cloud vendors are largely resistant to incorporating frameworks such as Dell Boomi (and others) that make their products simple to integrate with existing systems.

Jason Haskins, Data Architect at Alchemy Systems, a rapidly growing international company that delivers innovative technologies and services for the global food industry, has to deal with thousands of different data sources as part of his Business Intelligence data architecture. He anticipates the number of disparate sources could easily double in the next 24 months. "Embracing all these different formats and creating a system with a focus on usability, flexibility and scalability is the key to success in this area. It's typically a big mistake for IT to try to force people to restructure their data or to change the way they do business. By bridging the IT and the business world with a flexible and easy to use system, everybody wins."

Don't expect this trend and the integration headaches to slow down - the burgeoning market for Mobile applications will add fuel to this fire. Chris McNabb, General Manager of Dell Boomi commented, "To take competitive advantage of the cloud, companies are desperately looking for ways to accelerate the development of integration flows between their various cloud, on-premises and mobile applications."

Meanwhile, IT continues to be held responsible for many of the implications resulting from this widespread proliferation. Security, governance and compliance are just the tip of the iceberg. Integrating all these disparate systems to automate processes or build effective Business Intelligence systems is another - and online backup and disaster recovery planning is yet another.

A recent study by Netskope validates this app and data explosion - and how IT is being caught unaware. They found that IT experts misjudged Cloud application usage within their companies by as much as 90%. In the Netskope report, IT professionals estimated that their company only used 40 to 50 applications. The actual number: nearly 400 Cloud applications. And this is in addition to the hundreds to thousands of disparate and often distributed on-premises "legacy" systems in most organizations.
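The scale of that misjudgment is worth spelling out: an estimate of 40-50 apps against an actual count near 400 means IT saw only 10-13% of what was really in use. A trivial check:

```python
# How far off were the IT estimates in the Netskope study?
estimated = [40, 50]   # apps IT believed were in use
actual = 400           # approximate real count reported

for est in estimated:
    undercount = 1 - est / actual
    print(f"estimated {est}: missed {undercount:.0%} of actual usage")
```

An estimate of 40 misses 90% of actual usage; 50 misses 87.5%, which matches the "as much as 90%" figure in the report.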

Mark Cathcart
Dell PowerEdge 13g Servers with NFC

Although I have not worked in the server group at Dell for almost three years, I was delighted to see, among the innovations announced at yesterday’s PowerEdge 13g launch, the Near Field Communication (NFC) concept and prototype I proposed just over two years ago.

Enhanced at-the-server management, from anywhere: Dell introduces iDRAC Quick Sync, using Near Field Communication (NFC), an industry first. It is one example of many that belies the commonly held notion that Dell doesn’t innovate.

For customers managing at-the-box, this new capability transmits server health information and basic server setup via a hand-held smart device running OpenManage Mobile, simply by tapping it at the server. OpenManage Mobile also enables administrators to monitor and manage their environments anytime, anywhere with their mobile device.

Mark Cathcart: Let’s Go do rail like Houston!

Mark Cathcart:

Fantastic write-up on the mechanics and rights and wrongs of Prop-1. Just vote NO. I can’t vote until 2016, so make your vote count for both of us.

Originally posted on Keep Austin Wonky:

Advocates for this November’s ‘road and rail’ Proposition 1 would like the electorate to believe the proposed light rail segment will achieve success similar to Houston’s stellar Red Line. Here are the top 3 reasons why they are wrong and why it matters.


Source: National Transit Database. “UPT” means unlinked passenger trip (i.e. boarding). Median values for a category in bold.



Gina Minks: What does the death of Twitter mean to online enterprise tech communities?

You probably have heard about the changes Twitter is planning so the timeline can be more “user friendly”. Twitter wants to take the noise out of your timeline by determining what you should see, much like Facebook does. I think this marks the end of an era. And I’m not alone. In the blog post something is rotten in the state of…Twitter, @bonstewart discusses the ways social is just not what it used to be. You can read more here.

Kevin Houston: Introducing the Cisco UCS B200 M4 Blade Server

With today’s announcement of the Intel E5-2600 v3, Cisco announced the UCS B200 M4 Blade Server.  Here’s a quick overview of it.

The 4th generation of the Cisco UCS B200 will offer the following:

Cisco UCS B200 M4 Blade Server

  • Up to 2 x Intel Xeon E5-2600 v3 CPUs
  • 24 DIMMs of DDR4 memory delivering speeds up to 2133MHz and a maximum capacity of 768GB
  • 2 x hot plug HDD or SSDs
  • Dual SDHC flash card sockets (aka Cisco FlexFlash)
  • Cisco UCS Virtual Interface Card (VIC) 1340: a 2-port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) mezzanine adapter.
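
As a quick sanity check on the memory line item, the 768GB ceiling falls straight out of the slot count times the largest DIMM size (a rough sketch; the 32GB DIMM size is an inference from the stated maximum, not a figure from the announcement):

```python
# Back-of-envelope check on the B200 M4 memory claim.
# Assumption: 32GB DDR4 DIMMs, inferred from the stated 768GB maximum.
dimm_slots = 24
dimm_size_gb = 32

max_capacity_gb = dimm_slots * dimm_size_gb
print(max_capacity_gb)  # 768
```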

Model specifics have not been provided at this time, but Cisco has released a datasheet which you can find here.



Kevin Houston - Founder, BladesMadeSimple.com

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 17 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.


Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

Kevin Houston: HP Announces the BladeSystem BL460c Gen 9

Today HP announced their next two-socket blade server based on the Intel Xeon E5-2600 v3 CPU, the BL460c Gen 9. Here’s a quick summary of it.

HP ProLiant BL460c Gen9

  • Up to 2 x Intel Xeon E5-2600 v3 CPUs (up to 18 cores per CPU)
  • 16 x DDR4 DIMM slots providing up to 512GB of RAM
  • Support for up to 2 x 12Gb/s SAS HDD or SSD

I wish I could provide more information, but unfortunately HP didn’t share details beyond the announcement, so I don’t have any additional specifics, including when it will be available to order. As I get them, I’ll update this blog post, so check back in the future.




Kevin Houston: Intel Announces the Xeon E5-2600 v3 CPU

Today Intel announced the next generation of their x86 CPU,  the Xeon E5-2600 v3.  The specific CPU models being offered vary by server vendor, so here’s a summary of what the new CPU will provide.

Summary of the Intel E5-2600 v3 CPU:

  1. Increase in CPU Cores – up to 18 cores with additional offerings of 16, 14, 12, 10, 8, 6 and 4 cores.
  2. Increase in Shared Cache – up to 45MB of Last Level Cache (LLC)
  3. Increase in QPI Speed – up to 9.6GT/s
  4. New DDR4 Memory – 4 x DDR4 channels supporting 32GB DIMMs (64GB in future); max of 2133 MHz
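
Those DDR4 figures imply a rough theoretical peak memory bandwidth per socket, using the standard transfers × bus-width × channels estimate (a sketch only; sustained real-world throughput will be lower):

```python
# Rough theoretical peak memory bandwidth per socket for the
# E5-2600 v3's DDR4 configuration: DDR4-2133 moves 2133 MT/s of
# 8-byte transfers per channel, with four channels per socket.
mt_per_s = 2133          # mega-transfers per second (DDR4-2133)
bytes_per_transfer = 8   # 64-bit channel width
channels = 4

peak_mb_s = mt_per_s * bytes_per_transfer * channels
print(f"{peak_mb_s / 1000:.1f} GB/s per socket")  # ~68.3 GB/s
```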

Intel Xeon E5-2600 v3 CPU Overview
For more details on the Intel Xeon E5-2600 v3, check out this great write up on




William Leara: USB 3.1 Developer Days, Berlin, Germany

original announcement:

The USB 3.1 Specification adds a SuperSpeed USB 10Gbps speed mode that uses a more efficient data encoding and will deliver more than twice the effective data throughput performance of existing SuperSpeed USB over enhanced, fully backward compatible USB connectors and cable. The specification extends the existing SuperSpeed mechanical, electrical, protocol and hub definition while maintaining compatibility with existing USB 3.0 software stacks and device class protocols as well as with existing 5Gbps hubs and devices and USB 2.0 products.
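
The "more than twice" figure follows from the raw signaling rates and line encodings (a sketch: USB 3.0 uses 8b/10b line encoding at 5 Gbps, while USB 3.1 Gen 2 uses the more efficient 128b/132b encoding at 10 Gbps, per the published specifications):

```python
# Effective payload bandwidth after line-encoding overhead:
# USB 3.0 carries 8 payload bits per 10 line bits (20% overhead);
# USB 3.1 Gen 2 carries 128 payload bits per 132 line bits (~3%).
usb30_payload_gbps = 5.0 * 8 / 10      # 4.0 Gbps of usable bits
usb31_payload_gbps = 10.0 * 128 / 132  # ~9.7 Gbps of usable bits

print(usb31_payload_gbps / usb30_payload_gbps)  # ratio is ~2.4x
```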

The USB Type-C Cable and Connector Specification defines a new USB connector solution that extends the existing set of cables and connectors to enable emerging platform designs where size, performance and user flexibility are increasingly more critical. The specification covers all of the mechanical and electrical requirements for the new connector and cables. Additionally, it covers the functional requirements that enable this new solution to be reversible both in plug orientation and cable direction, and to support functional extensions that designers are looking for in order to enable single-connector platform designs.

The USB Power Delivery Specification defines the use of a sideband communications method used between two connected USB products to discover, configure and manage power delivered across VBUS between USB products with control over power delivery direction, voltage (up to 20V) and current (up to 5A). The USB Power Delivery 2.0 update adds a new communications physical layer that is specific to the USB Type-C cable and connector solution. The specification also extends the definition of Structured Vendor Defined Messages (VDMs) to enable the functional extensions that are possible with the USB Type-C solution.

What:  USB 3.1 Developer Days is an opportunity to review these specifications and engage with experts in a face-to-face setting.
When:  The conference will be held October 1-2, 2014
Cost:  Members US $475.00
Non-members US $960.00    
Registration will close on Monday, September 22 at 5:00PM US Pacific Time. All attendees MUST be pre-registered as on-site registration will not be available.

Agenda (subject to change):
Day 1:  USB 3.1 (featuring the new USB Type-C connector)

- Registration check-in
- Introduction
- USB 3.1 Architectural Overview
- USB 3.1 Physical and Link Layers
- USB Type-C Functional Requirements
- USB 3.1 Protocol Layer
- USB 3.1 Hub
- USB 3.1 Compliance

Day 2: Track One
- USB Cables and Connectors (including USB Type-C)
     * Overview
     * USB Type-C Mechanical requirements and compliance
     * USB Type-C Electrical/EMC requirements and compliance
- USB 3.1 System Design
     * USB 3.1 design and interoperability goals, and design envelope (EQ capability, channel loss budget)
     * System simulation:  reference channels and reference equalizers
     * Key system performance metrics and design trade-offs
     * Design recommendations and trade-offs for package and PCB designs
     * Silicon design considerations, including equalizers and system margining
     * Re-timing repeater design requirements
     * Design to minimize EMI & RFI

Day 2:  Track Two
- USB Power Delivery 2.0
     * Introduction and Architectural Overview
     * Electrical/Physical Layer
     * Protocol Layer
     * Protocol Extensions (specific to USB Type-C)
     * Device and System Policy
     * Power Supply
     * Compliance

Where:  Sofitel Berlin Kurfürstendamm
            Augsburger Strasse 41
            10789 Berlin

Tel.: (+49) 30 800 9990
Fax: (+49) 30 800 99999

Hotel Accommodations
The group room block is at the Sofitel Berlin Kurfürstendamm. To receive the group sleeping room rate of EUR 145 per night (single occupancy, includes tax, breakfast and guestroom internet) attendees should make their reservations by completing the Hotel Reservation Form and submitting it directly to the hotel via fax or email. A double occupancy rate of EUR 165 is also available. The reservation deadline is Monday, September 15, 2014. Reservations received after September 15th are subject to availability and room type and will be offered at the group rate based on availability only. 

A major credit card is needed to guarantee guestroom reservations. Any reservation cancellations should be made by September 25th to avoid cancellation penalties. If the room is cancelled after this date or is not checked in on the day of arrival, the hotel will charge 100% of the agreed room rate for the entire stay to the credit card on file.

Hotel check-in time is 3:00pm. Check-out time is 12:00pm.  Early check-in and late check-out are subject to availability. 

Hotel:  Sofitel Berlin Kurfürstendamm
Cut-Off Date:  Monday, September 15, 2014
Group Rate:  EUR 145 per night

Kevin Houston: A First Look at the Dell PowerEdge M630

The PowerEdge M630, Dell’s newest blade server based on the Intel Xeon E5-2600 v3, was announced today. Although specifics haven’t been officially posted on Dell’s website, a video releasing some highlights of the newest member of the PowerEdge family was found on YouTube by Gartner analyst @Daniel_Bowers, so here is a quick look at it.

M630 with 4 x 1.8″ SSDs

The PowerEdge M630 is a half-height blade server with up to 2 x Intel Xeon E5-2600 v3 CPUs (up to 36 cores), 24 DDR4 DIMMs, up to 4 x 10GbE CNA ports, plus support for up to 2 additional I/O mezzanine expansion cards (up to 8 x 10GbE ports total). Best of all is the “4 drive configuration” shown in the image to the left. More details on that when it becomes available…

UPDATED: The newest addition to this blade server is the use of 1.8″ Solid State Drives (SSDs) offering high performance at an affordable price point. Dell has not published the available drive sizes, but as they become available, I’ll publish them here.

Check out the full video on the Dell PowerEdge M630 Blade Server here.


Kevin Houston: A Look at the Cisco UCS M-Series

On September 4th, Cisco released a new line of modular servers under the UCS family known as the M-Series. Interestingly enough, Cisco’s not calling the new servers “blade servers”; instead they are taking a play out of HP’s Moonshot playbook and calling them “cartridges.” The M-Series won’t be available until Q4 of this year, but in this blog post, I’ll highlight the information Cisco has provided.

Cisco is taking a very unique approach with the UCS M-Series. Veering away from the traditional server model of each server having its own NIC and RAID controller, the servers in the M-Series are “disaggregated” and share a NIC and storage. Although this platform is ideal for nearly any single-threaded application, Cisco appears to be targeting the M-Series at “Cloud-Scale Applications.”

M4308 Chassis

The chassis for the new M-Series is known as the M4308 and is a 2U form factor that holds 8 x quarter-width M142 server cartridges – more on these below. As you can see in the image, the front of the chassis is not very complex. On the left side is a series of LEDs that give basic information on the chassis, such as whether it has power, whether there are any alerts and whether there is network connectivity. On the right side you’ll notice LEDs numbered 1 – 8 signifying the cartridges, most likely confirming they are connected and powered on.

Cisco M4308 chassis - rear

The rear of the chassis houses the 4 x SSD drive bays (choice of SAS or SATA drives with capacities ranging from 240 GB to 1.6 TB per disk) that are connected to a single 12G modular RAID controller with 2-GB flash-backed write cache (FBWC). The chassis shares 2 x 1400 W power supplies and has 2 x 40GbE uplinks. From what I can understand, these 40GbE links connect to the single internal Virtual Interface Card that is shared across each of the server cartridges (which equates to 5GbE per server.) On the left side of the rear of the chassis is what appears to be a PCIe port that could be shared across the server cartridges; however, nothing was mentioned in the blog or data sheets, so that slot’s use is unclear. One thing they did mention in the Cisco blog is that the sharing of RAID and NICs is performed through something called UCS System Link Technology – a silicon-based technology that gives the M-Series the ability to connect these disaggregated subsystems via a UCS System Link fabric and create a truly composable infrastructure. Based on details from the data sheet, the 40GbE uplinks will connect directly into the UCS 6200 Fabric Interconnect, and up to 20 M4308 chassis can be connected in a single domain. Hopefully Cisco will reveal more about this technology as it gets closer to availability in Q4.

M142 Server Cartridge

Cisco M142 Cartridge

The Cisco UCS M-Series servers are nothing like the UCS B-Series blade servers, which is perhaps why Cisco is calling them “cartridges”. A single cartridge actually holds 2 servers, each with 1 x Intel E3 CPU and 4 x 8GB DDR3 1600MHz DIMMs. The Intel E3 CPU options being offered are:

  • Intel® Xeon® processor E3-1275L v3 (8-MB cache, 2.7 GHz), 4 cores, and 45W
  • Intel® Xeon® processor E3-1240L v3 (8-MB cache, 2.0 GHz), 4 cores, and 25W
  • Intel® Xeon® processor E3-1220L v3 (4-MB cache, 1.1 GHz), 2 cores, and 13W

A quick observation – if you multiply 45W x 16 compute nodes, you come out to 720W.  As mentioned above, the chassis has 2 x 1400W redundant power supplies, so this leaves 700+W for the VIC and RAID – or is this a preview into what Cisco’s next cartridge might require?
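
That back-of-envelope math can be sketched as follows (the figures come from the article; treating a single 1400W supply as the usable budget is my simplification):

```python
# Power and bandwidth back-of-envelope for the M4308 chassis.
nodes = 16        # 8 cartridges x 2 servers each
max_cpu_w = 45    # highest TDP offered (E3-1275L v3)
psu_w = 1400      # capacity of one of the two redundant supplies

cpu_budget_w = nodes * max_cpu_w
headroom_w = psu_w - cpu_budget_w
print(cpu_budget_w, headroom_w)  # 720 680

# Network share: 2 x 40GbE uplinks split across 16 servers
print(2 * 40 / nodes)  # 5.0 (GbE per server)
```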

For more information on the Cisco UCS M-Series, visit Cisco’s website.




Hollis Tibbetts (Ulitzer): CIO Shocker: #Cloud and SaaS Surprise Awaits

The advancement of technology has led to widespread Cloud data and SaaS application usage throughout enterprises. And CIOs are unprepared for the (mostly unwelcome) implications - largely unaware of the "SaaS Sprawl" in their organizations. These Cloud applications are available for just about every role in a company - from human resources to marketing, there's an app for that. And odds are, someone in your organization is using it - most likely without IT knowing. As app (primarily SaaS and Cloud) use within organizations continues to spread and accelerate, IT professionals are largely unaware of the massive scale of Cloud application utilization. However, IT continues to be held responsible for many of the implications resulting from this widespread proliferation.

read more

Mark Cathcart: Rail isn’t about Congestion

It's not going to fix congestion.


Prop.1 on the Austin November ballot is an attempt to fund the largest single bond in Austin history, almost half the $1 billion going to the light rail proposal.

Finally people seem to be getting the fact that the light rail, if funded, won’t help with the existing traffic. KUT had a good review of this yesterday, and the comments also contain some useful links. You can listen to the segment here: Is a Light Rail Line Going to Solve Austin’s Traffic Problems?

Jace Deloney makes some good points, but here is what no one is saying, and what I believe is the real reason behind the current proposal: there is a real opportunity to develop a corridor of key central Austin land, some unused and much underused, west of I-35, from Airport all the way down to Riverside Dr.

This is hugely valuable land, but encouraging development would be a massive risk, purely because of existing congestion. Getting more people to/from buildings in that corridor, by car, or even bus, into more dense residential accommodation, a medical school, UT Expansion or re-site, more office, whatever, will be untenable in terms of both west/east and south/north congestion. So the only way this could really work, is to make a rail corridor, with stations adjacent the buildings.

The Guadalupe/Lamar route favored by myself and other rail advocates would add almost no value to that new corridor. It’s debatable whether it would eliminate congestion on the west side of town either. But with a rail transit priority system, the new toll lanes on Mopac, the ability to get around at peak times, and the elimination of a significant number of cars in the central west and downtown areas, it would be worth the investment.

Voters need to remember this when considering which way to vote in November. If the city, UT, and developers want to develop that corridor, they should find some way of funding rail from those who will directly benefit. Citing city-wide economic impact (new tax revenues, new jobs) is a sleight of hand, a misdirection.

It’s not acceptable to load the cost onto existing residents for little benefit, just so developers can have their way.

William Leara: Fall 2014 UEFI Plugfest

The UEFI Testing Work Group (UTWG) and the UEFI Industry Communications Work Group (ICWG) from the Unified EFI (UEFI) Forum invite you to the upcoming UEFI Plugfest being held October 13-17, 2014 in Taipei, Taiwan.

If you require formal invitation documents for Visa application/traveling purposes, please contact Tina Hsiao for more information.

UEFI membership is required to attend UEFI Testing Events & Workshops. If you are not yet a UEFI member, please visit to learn about obtaining UEFI membership.

Please stay tuned for updates regarding the Fall 2014 UEFI Plugfest. Registration and other logistical information will be provided very soon.


Event Contact

Tina Hsiao, Insyde Software

Phone: (02) 6608-3688 Ex: 1599


Rob Hirschfeld: VMware Integrated OpenStack (VIO) is a smart move; it’s like using a Volvo to tow your ski boat

I’m impressed with VMware’s VIO (beta) play and believe it will have a meaningful positive impact in the OpenStack ecosystem.  In the short-term, it paradoxically both helps enterprises stay on VMware and accelerates adoption of OpenStack.  The long term benefit to VMware is less clear.

From VWVortex

Sure, you can use a Volvo to tow a boat

Why do I think it’s good tactics?  Let’s explore an analogy….

My kids think owning a boat will be super fun with images of ski parties and lazy days drifting at anchor with PG13 umbrella drinks; however, I’ve got concerns about maintenance, cost and how much we’d really use it.  The problem is not the boat: it’s all of the stuff that goes along with ownership.  In addition to the boat, I’d need a trailer, a new car to pull the boat and driveway upgrades for parking.  Looking at that, the boat’s the easiest part of the story.

The smart move for me is to rent a boat and trailer for a few months to test my kids’ interest. In that case, I’m going to be towing the boat using my Volvo instead of going “all in” and buying that new Ferd 15000 (you know you want it). As a compromise, I’ll install a hitch on my trusty sedan and use it gently to tow the boat. It’s not ideal and causes extra wear to the transmission, but it’s a very low-risk way to explore the boat-owning lifestyle.

Enterprise IT already has the Volvo (VMware vCenter) and likely sees calls for OpenStack as the illusion of cool ski parties without regard for the realities of owning the boat. Pulling the boat for a while (using OpenStack on VMware) makes a lot of sense for these users. If the boat gets used, then they will buy the truck and accessories (move off VMware). Until then, they’re still learning about the open source boating lifestyle.

Putting open source concerns aside, this helps VMware lead the OpenStack play for enterprises, but it may ultimately backfire if they have not set up their long game to keep the customers.

William Leara: My Favorite Obituary

Okay, I know it’s a bizarre title, but bear with me.  Mr. Tom Halfhill, a computer journalist I grew up reading in COMPUTE! magazine, wrote the following “obituary” upon the death of Commodore.  If you’re like me and grew up with a Commodore 64 computer, I think you will find it a poignant tribute.  (have tissues nearby…)

Beautifully written, thoughtful and accurate, this “obituary” best tells the story of Commodore and expresses the spirit of the early personal computer era.

R.I.P. Commodore 1954-1994

A look at an innovative computer industry pioneer, whose achievements have been largely forgotten

Tom R. Halfhill

Obituaries customarily focus on the deceased’s accomplishments, not the unpleasant details of the demise. That’s especially true when the demise hints strongly of self-neglect tantamount to suicide, and nobody can find a note that offers some final explanation.

There will be no such note from Commodore, and it would take a book to explain why this once-great computer company lies cold on its deathbed. But Commodore deserves a eulogy, because its role as an industry pioneer has been largely forgotten or ignored by revisionist historians who claim that everything started with Apple or IBM. Commodore’s passing also recalls an era when conformity to standards wasn’t the yardstick by which all innovation was measured.

In the 1970s and early 1980s, when Commodore peaked as a billion-dollar company, the young computer industry wasn’t dominated by standards that dictated design parameters. Engineers had much more latitude to explore new directions. Users tended to be hobbyists who prized the latest technology over backward compatibility. As a result, the market tolerated a wild proliferation of computers based on many different processors, architectures, and operating systems.

Commodore was at the forefront of this revolution. In 1977, the first three consumer-ready personal computers appeared: the Apple II, the Tandy TRS-80, and the Commodore PET (Personal Electronic Transactor). Chuck Peddle, who designed the PET, isn’t as famous as Steve Wozniak and Steve Jobs, the founders of Apple. But his distinctive computer with a built-in monitor, tape drive, and trapezoidal case was a bargain at $795. It established Commodore as a major player.

The soul of Commodore was Jack Tramiel, an Auschwitz survivor who founded the company as a typewriter-repair service in 1954. Tramiel was an aggressive businessman who did not shy away from price wars with unwary competitors. His slogan was “computers for the masses, not the classes.”

In what may be Commodore’s most lasting legacy, Tramiel drove his engineers to make computers that anyone could afford. This was years before PC clones arrived. More than anyone else, Tramiel is responsible for our expectation that computer technology should keep getting cheaper and better. While shortsighted critics kept asking what these machines were good for, Commodore introduced millions of people to personal computing. Today, I keep running into those earliest adopters at leading technology companies.

Commodore’s VIC-20, introduced in 1981, was the first color computer that cost under $300. VIC-20 production hit 9000 units per day—a run rate that’s enviable now, and was phenomenal back then. Next came the Commodore 64 (1982), almost certainly the best-selling computer model of all time. Ex-Commodorian Andy Finkel estimates that sales totaled between 17 and 22 million units. That’s more than all the Macs put together, and it dwarfs IBM’s top-selling systems, the PC and the AT.

Commodore made significant technological contributions as well. The 64 was the first computer with a synthesizer chip (the Sound Interface Device, designed by Bob Yannes). The SX-64 (1983) was the first color portable, and the Plus/4 (1984) had integrated software in ROM.

But Commodore’s high point was the Amiga 1000 (1985). The Amiga was so far ahead of its time that almost nobody—including Commodore’s marketing department—could fully articulate what it was all about. Today, it’s obvious the Amiga was the first multimedia computer, but in those days it was derided as a game machine because few people grasped the importance of advanced graphics, sound, and video. Nine years later, vendors are still struggling to make systems that work like 1985 Amigas.

At a time when PC users thought 16-color EGA was hot stuff, the Amiga could display 4096 colors and had custom chips for accelerated video. It had built-in video outputs for TVs and VCRs, still a pricey option on most of today’s systems. It had four-voice, sampled stereo sound and was the first computer with built-in speech synthesis and text-to-speech conversion. And it’s still the only system that can display multiple screens at different resolutions on a single monitor.

Even more amazing was the Amiga's operating system, which was designed by Carl Sassenrath. From the outset, it had preemptive multitasking, messaging, scripting, a GUI, and multitasking command-line consoles. Today’s Windows and Mac users are still waiting for some of those features. On top of that, it ran on a $1200 machine with only 256 KB of RAM.

We may never see another breakthrough computer like the Amiga. I value my software investment as much as anyone, but I realize it comes at a price. Technology that breaks clean with the past is increasingly rare, and rogue companies like Commodore that thrived in the frontier days just don’t seem to fit anymore.

My Thoughts

But Commodore deserves a eulogy, because its role as an industry pioneer has been largely forgotten or ignored by revisionist historians who claim that everything started with Apple or IBM.
This is so true.  Especially with the return of Steve Jobs to Apple and that company’s resurgence, people have the following idea of computer history:  Apple invented the personal computer, then IBM and Microsoft unfairly took it over.  That’s ridiculous—in fact, the Commodore PET was launched before the Apple ][.  The TRS-80 was the early PC market leader by virtue of Radio Shack having a nation-wide distribution system in place.  Commodore took over market leadership with the introduction of the VIC-20.  It wasn’t until VisiCalc was released on the Apple ][ (by dumb luck) that Apple caught a break and became a significant company.
The 64 was the first computer with a synthesizer chip (the Sound Interface Device, designed by Bob Yannes). The SX-64 (1983) was the first color portable, and the Plus/4 (1984) had integrated software in ROM.
This reminds me of a comment Steve Wozniak made at the 25th Anniversary of the Commodore 64, a celebration hosted by the Computer History Museum.  He criticized the C64 as not being expandable.  First of all, that’s just plain wrong.  The C64 was just as expandable as an Apple ][, it just used serial, parallel, cassette, and an external expansion port to do it, rather than the internal expansion slot approach used by Apple and others.  But anyway, my main point is that the C64 didn’t have to be so expandable, since, unlike the Apple ][, so much was already built in!  Like the SID sound chip—Apple ][ owners had to buy a separate expansion card; C64 owners had four voice sound for free.  The basic Apple ][e was monochrome—the C64 gave you color for free.

In Closing

Ironically, when Mr. Halfhill says “…and it would take a book to explain why this once-great computer company lies cold on its deathbed”, someone did, and I highly recommend the book!:
Long live the Commodore 64!

William Leara: DMTF Webinars Now Available On-Demand

The Distributed Management Task Force (DMTF) produces standards of great interest to BIOS developers (e.g., SMBIOS). Did you know that DMTF webinars are now available online for on-demand viewing?

There are currently 20+ talks mainly covering virtualization, storage, cloud computing, and the management of these technologies.  See:

Note:  Viewing requires the user to register with BrightTALK.  It’s quick and painless and does not cost anything.

Rob Hirschfeld: OpenStack DefCore Process Flow: Community Feedback Cycles for Core [6 points + chart]

If you’ve been following my DefCore posts, then you already know that DefCore is an OpenStack Foundation Board managed process “that sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack™ products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled OpenStack™.”

In this post, I’m going to be very specific about what we think “community resources and involvement” entails.

The draft process flow chart was provided to the Board at our OSCON meeting without additional review. It boils down to a few key points:

  1. We are using the documents in the Gerrit review process to ensure that we work within the community processes.
  2. Going forward, we want to rely on the technical leadership to create, cluster and describe capabilities.  DefCore bootstrapped this process for Havana.  Further, Capabilities are defined by tests in Tempest so test coverage gaps (like Keystone v2) translate into Core gaps.
  3. We are investing in data driven and community involved feedback (via Refstack) to engage the largest possible base for core decisions.
  4. There is a “safety valve” for vendors to deal with test scenarios that are difficult to recreate in the field.
  5. The Board is responsible for approving the final artifacts based on the recommendations.  By having a transparent process, community input is expected in advance of that approval.
  6. The process is time sensitive.  There’s a need for the Board to produce Core definition in a timely way after each release and then feed that into the next one.  Ideally, the definitions will be approved at the Board meeting immediately following the release.
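The capability-to-test linkage in point 2 can be sketched as a simple mapping: each capability is backed by must-pass Tempest test IDs, and any capability without tests surfaces as a Core gap. A minimal sketch, using hypothetical capability and test names (these are not actual DefCore identifiers):

```python
# Each capability maps to the must-pass Tempest tests that define it.
# All capability and test names below are hypothetical illustrations.
capabilities = {
    "compute-servers-create": ["tempest.api.compute.test_create_server"],
    "identity-v2-tokens": [],  # no Tempest coverage -> a Core gap (cf. Keystone v2)
}

def coverage_gaps(caps):
    """Return capabilities that no must-pass test currently backs."""
    return sorted(name for name, tests in caps.items() if not tests)

print(coverage_gaps(capabilities))  # ['identity-v2-tokens']
```

In the real process these mappings live in the Gerrit-reviewed DefCore documents; the point is simply that test coverage, not prose, defines a capability.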

DefCore Process Draft

The process shows how the key components (designated sections and capabilities) start from the previous release’s version, with the DefCore committee managing the update process.  Community input is a vital part of the cycle.  This is especially true for identifying actual use of the capabilities through the Refstack data collection site.

  • Blue is for Board activities
  • Yellow is for user/vendor community activities
  • Green is for technical community activities
  • White is for process artifacts

This process is very much in draft form and any input or discussion is welcome!  I expect DefCore to take up formal review of the process in October.

Rob HirschfeldCloud Culture: No spacesuits, Authority comes from doing, not altitude [Collaborative Series 4/8]

Subtitle: Why flattening org charts boosts your credibility

This post is #4 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Unlike other generations, Digital Natives believe that expertise comes directly from doing, not from position or education. This is not hubris; it’s a reflection of both their computer experience and dramatic improvements in technology usability.

If you follow Joel Spolsky’s blog, “Joel on Software,” you know the term he uses for information architects obsessed with the abstract and not the details: Architecture Astronauts—so high up above the problem that they might as well be in space. “They’re astronauts because they are above the oxygen level, I don’t know how they’re breathing.”

For example, a Digital Native is much better positioned to fly a military attack drone than a Digital Immigrant. According to New Scientist, March 27, 2008, the military is using game controllers for drones and robots because they are “far more intuitive.” Beyond the fact that the interfaces are intuitive to them, Digital Natives have likely logged hundreds of hours flying simulated jets under trying battle conditions. Finally, they rightly expect that they can access all the operational parameters and technical notes about the plane with a Google search.

Our new workforce is ready to perform like none other in history.

Being able to perform is just the tip of the iceberg; having the right information is the more critical asset. A Digital Native knows information (and technology) is very fast moving and fluid. It also comes from all directions … after all it’s The Information Age. This is a radical paradigm shift. Harvard researcher David Weinberger highlights in his book Too Big to Know that people are not looking up difficult technical problems in a book or even relying on their own experiences; they query their social networks and discover multiple valid solutions. The diversity of their sources is important to them, and an established hierarchy limits their visibility; conversely, they see leaders who build strict organizational hierarchies as cutting off their access to information and diversity.

Today’s thought worker is on the front lines of the technological revolution. They see all the newness, data, and interaction with a peer-to-peer network. Remember all that code on the screen in the movie The Matrix? You get the picture.

To a Digital Native, the vice presidents of most organizations are business astronauts floating too high above the world to see what’s really going on but feeling like they have perfect clarity. Who really knows the truth? Mission Control or Major Tom? This is especially true with the acceleration of business that we are experiencing. While the Astronaut in Chief is busy ordering the VPs to move the mountains out of the way, the engineers at ground control have already collaborated on a solution to leverage an existing coal mine and sell coal as a byproduct.

The business hierarchy of yesterday worked for a specific reason: workers needed to just follow rules, keep their mouths shut, and obey. Input, no matter how small, was seen as intrusive and insubordinate … and could get one fired. Henry Ford wanted an obedient worker to mass manufacture goods. The digital age requires a smarter worker because, in today’s world, we make very sophisticated stuff that does not conform to simple rules. Responsibility, troubleshooting, and decision-making have moved to the frontlines. This requires open-source style communication.

Do not confuse the Astronaut problem with a lack of respect for authority.

Digital Natives respect informational authority, not positional. For Digital Natives, authority is flexible. They have experience forming and dissolving teams to accomplish a mission. The mission leader is the one with the right knowledge and skills for the situation, not the most senior or highest scoring. In Liquid Leadership, Brad explains that Digital Natives are not expecting managers to solve team problems; they are looking to their leadership to help build, manage, and empower their teams to do it themselves.

So why not encourage more collaboration with a singular mission in mind: develop a better end product? In a world that is expanding at such mercurial speed, a great idea can come from anywhere! Even from a customer! So why not remember to include customers in the process?

Who is Leroy Jenkins?

This viral video is about a spectacular team failure caused by one individual (Leroy Jenkins) who goes rogue during a massively multiplayer team game.  This is the Digital Natives’ version of the ant and grasshopper parable: “Don’t pull a Leroy Jenkins on us—we need to plan this out.”

Think about it like this: Working as a team is like joining a quest.

If comparing work to a game scenario sounds counterintuitive, then let’s reframe the situation. We may have the same destination and goals, but we are from very different backgrounds. Some of us speak different languages, have different needs and wants. Some went to MIT, some to community college. Some came through Internet startups, others through competitors. Big, little, educated, and smart. Intense and humble. Outgoing and introverted.  Diversity of perspective creates stronger teams.

This also means that leadership roles rotate according to each mission.

This is the culture of the gaming universe. Missions and quests are equivalent to workplace tasks accomplished and point to benchmarks achieved. Each member expects to earn a place through tasks and points. This is where Digital Natives’ experience becomes an advantage. They expect to advance in experience and skills. When you adapt the workplace to these expectations, the Digital Natives thrive.

Leaders need to come down to earth and remove the spacesuit.

A leader at the top needs to stay connected to that information and disruption. Start by removing your helmet. Breathe the same oxygen as the rest of us and give us solutions that can be used here on planet earth.

On Gamification

Jeff Atwood, co-founder of the community-based Q&A site Stack Overflow, has been very articulate about using game design to influence how he builds communities around sharing knowledge. We recommend reading his post about “Building Social Software for the Anti-Social” on his blog.

Ryan M. Garcia Social Media LawIceholes: How The ALSA May Win The Battle But Lose The War

You know what we do to bad ice on a pedestal?

The biggest surprise hit of the summer is not Guardians of the Galaxy but rather the megaviral smash Ice Bucket Challenge benefiting the ALS Association. Rather than be thankful for this windfall, the ALSA has recently decided that they should own this challenge and prevent any other cause or organization from using it. What do you think they are, a charity?

Oh yeah, they are.  Then maybe they should start acting like it and not a bunch of selfish iceholes.

First, some background. The ALSA did not create the ice bucket challenge. The gimmick has been around for a long time. In fact, when this latest round started over the summer, it began as a challenge to dump a bucket of ice water on your head or donate $100 to a charity of your choice.  It was only when the challenge passed to professional golfer Chris Kennedy that the donation was flagged for the ALSA, and the individuals he tagged kept the charity when they made their videos.  Later, there was a significant wave of ice bucket activity in Boston due to Boston native and ALS sufferer Pete Frates and concerted actions by the Red Sox organization.  Facebook’s data team’s analysis shows that Boston does appear to be the epicenter of the challenge going truly viral.

Nobody is exactly sure why the challenge has reached its current level of popularity, but that’s true for most viral hits in the social media age.  Sure, the videos are funny. And having one person tag several others to participate makes for an exponential reach. And having the challenge somehow associated with charity so we all think we can have fun while helping out a worthy cause makes it seem nice too. There is even a scattering of super-serious videos in the mix, depicting a bit of what the disease means to its victims and their families. We can identify all the elements, but we still don’t know what made this challenge go viral like it did.  Heck, even I did one, although I’m not linking it, given the reasons behind this post.
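That tagging mechanic is genuinely exponential: each participant recruits several more, so reach compounds every round. A toy model with made-up numbers (three tags per person, and an adjustable follow-through rate):

```python
# Toy model of challenge reach: every participant tags `fanout` people,
# of whom a fraction `uptake` actually participate. Numbers are invented.
def reach(rounds, fanout=3, uptake=1.0):
    total = current = 1.0
    for _ in range(rounds):
        current *= fanout * uptake
        total += current
    return int(total)

print(reach(10))  # full uptake: 1 + 3 + 9 + ... + 3**10 = 88573
```

Even a 50% follow-through rate still grows the pool by half again each round, which is why the challenge saturated everyone’s feeds within weeks.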

That doesn’t really matter though. It doesn’t matter that we can’t explain why it went viral; it went viral. It doesn’t matter that perhaps the amount of money we give to charities is out of proportion to the impact of the disease, as an infographic in a Vox article (linked by IFLScience) shows; there is no doubt this is a horrific disease and increased attention to it is a good thing. It doesn’t matter that the ALSA only spends a small percentage of its budget on research; it performs several other valuable services, and all charities have to spend a lot of money to ultimately make more money in the end.

Here’s what does matter: the ALSA was given the greatest gift of their life in this ice bucket challenge.  Donations are through the roof.  Yesterday they reported receiving over $94.3 million in donations in just the last month.  Last year, in the same time period, they received around $2.7 million.  Rather than just say thanks or give the tearful Sally Field “You like me, you really like me!” Oscar acceptance speech, they decided to go another direction. They decided to take that warm fuzzy feeling we’ve had from watching or making these videos and donating to a worthy cause and pour a giant bucket of ice water on our flames of altruism.

As first reported on the Erik M Pelton & Associates blog, the ALSA filed an application with the US Patent and Trademark Office to be granted a trademark for the term ICE BUCKET CHALLENGE as used for any charitable fundraising.  They also filed an application for ALS ICE BUCKET CHALLENGE, but it’s the main application that should make people furious.  Heck, it made me mad enough to write a blog post on a Thursday night, and I never do that.

Filing a trademark for the term “Ice Bucket Challenge” would allow them to prevent any other charity from promoting a campaign that the ALSA had fall into their lap.  The ALSA did not create this concept.  They did not market this campaign until it already went viral.  They have no responsibility whatsoever for this going viral.  If the ice bucket challenge had found a connection to the American Heart Association or the American Cancer Society then it could have gone just as viral.

What on earth could make the ALSA think they should have any right whatsoever to prevent someone else from using this challenge?

I can’t think of a good reason.  I can think of reasons, mind you.  They just aren’t good.  Fortune was able to get a statement from ALSA spokesperson Carrie Munk:

The ALS Association took steps to trademark Ice Bucket Challenge after securing the blessings of the families who initiated the challenge this summer. We did this as a good faith effort after hearing that for-profit businesses were creating confusion by marketing ALS products in order to capitalize on this grassroots charitable effort.

Sorry, ALSA, but that excuse doesn’t hold water.

First, obtaining the blessings of the families who created this challenge is nonsense.  Even if you got permission from everyone who ever did an ice bucket challenge–SO WHAT?  This was a charity drive.  You think the first charity to earn a million dollars from a bake sale should get to stop all other bake sales?  Because that’s what filing a trademark on the challenge is an attempt to do–you’re trying to stop any other charity from using the term for fundraising.

Second, you heard some shady companies were making money off the Ice Bucket Challenge?  Wow, that must be weird.  To think there are these companies just sitting around making money off something they didn’t create.  JUST LIKE YOU.  Who cares if someone makes an Ice Bucket Challenge shirt and sells it?  If it says ALSA on it or has your logo you can already go after them without this new trademark application.

The ALSA’s actions are atrocious and reprehensible.  They may have raised a ton of money this summer but it could all backfire over a move like this.

But here, ALSA, I’m going to be nicer than you appear to be.  Here’s a way for you to cover your cold, soaked behinds and spin this in a favorable way.  What you should have done is post on your website the day you filed the application, saying that you are only doing so to protect all charities from shady profiteers but that all charities would be free to use the mark forever for no charge if you received the trademark.  The fact that you didn’t tell anyone about the application and only commented when it was called out on social media (by the way, you’ve heard about this social media thing and how a lot of people use it, right?) you can just blame on being so busy counting all your money.  It’s a bad excuse, but maybe it can save some face.

Because right now you look like a bunch of iceholes and I resent every penny I gave you.  Not for the good work you’ve done, which is a lot, or the families you’ve helped, which are numerous, but for being greedy instead of generous, selfish instead of, you know, charitable.

Update Aug 29: The ALSA has withdrawn their trademark application. Good.

Jason BocheVMworld 2014 U.S. Top Ten Sessions

Following is the tabulated listing of the VMworld 2014 U.S. top ten sessions as of noon PST 8/28/14. If you plan on catching up on recorded sessions later, this top ten list is a good place to start. Nice job to all of the presenters on this list, as well as to all presenters at VMworld.

Tuesday – STO1965.1 – Virtual Volumes Technical Deep Dive
Rawlinson Rivera, VMware
Suzy Visvanathan, VMware

Tuesday – NET1674 – Advanced Topics & Future Directions in Network Virtualization with NSX
Bruce Davie, VMware

Tuesday – BCO1916.1 – Site Recovery Manager and Stretched Storage: Tech Preview of a New Approach to Active-Active Data Centers
Shobhan Lakkapragada, VMware
Aleksey Pershin, VMware

Tuesday – INF1522 – vSphere With Operations Management: Monitoring the Health, Performance and Efficiency of vSphere with vCenter Operations Manager
Kyle Gleed, VMware
Ryan Johnson, VMware

Tuesday – SDDC3327 – The Software-defined Datacenter, VMs, and Containers: A “Better Together” Story
Kit Colbert, VMware

Tuesday – SDDC1600 – Art of IT Infrastructure Design: The Way of the VCDX – Panel
Mark Gabryjelski, Worldcom Exchange, Inc.
Mostafa Khalil, VMware
chris mccain, VMware
Michael Webster, Nutanix, Inc.

Tuesday – VAPP1318.1 – Virtualizing Databases Doing IT Right – The Sequel
Michael Corey, Ntirety – A Division of Hosting
Jeff Szastak, VMware

Tuesday – SEC1959-S – The “Goldilocks Zone” for Security
Martin Casado, VMware
Tom Corn, VMware

Monday – HBC1533.1 – How to Build a Hybrid Cloud – Steps to Extend Your Datacenter
Chris Colotti, VMware
David Hill, VMware

Monday – INF1503 – Virtualization 101
Michael Adams, VMware

Post from: - VMware Virtualization Evangelist

Copyright (c) 2010 Jason Boche. The contents of this post may not be reproduced or republished on another web page or web site without prior written permission.


Kevin HoustonIDC Worldwide Server Tracker – Q2 2014 Released

The Q2 2014  IDC Worldwide Server Tracker was released on August 26, 2014 and it reported that the demand for x86 servers improved in 2Q14 with revenues increasing 7.8% year over year in the quarter to $9.8 billion worldwide as unit shipments increased 1.5% to 2.2 million servers. HP led the market with 29.6% revenue share based on 7.4% revenue growth over 2Q13. Dell retained second place, securing 21.2% revenue share.
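For context, the quoted growth rates let you back out the year-ago figures and the average selling price. A quick derivation (the derived values are my arithmetic, not IDC’s):

```python
# 2Q14 figures quoted from the IDC tracker: $9.8B revenue (+7.8% y/y),
# 2.2M units (+1.5% y/y). Derive the 2Q13 baselines and average selling price.
rev_2q14, units_2q14 = 9.8e9, 2.2e6

rev_2q13 = rev_2q14 / 1.078       # ~ $9.09B revenue in 2Q13
units_2q13 = units_2q14 / 1.015   # ~ 2.17M servers shipped in 2Q13
asp_2q14 = rev_2q14 / units_2q14  # ~ $4,455 average revenue per server
```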


“Modular servers – blades and density-optimized – represent distinct segments of growth for vendors in an otherwise mature market,” said Jed Scaramella, Research Director, Enterprise Servers and Datacenter at IDC. “As the building block for integrated systems, blade servers will continue to drive enterprise customers along the evolutionary path toward private clouds. On the opposite side of the spectrum, density-optimized servers are being rapidly adopted by hyperscale datacenters that favor the scalability and efficiency of the form factor.”

If you want to read the entire press release, please visit


Kevin Houston is the founder and Editor-in-Chief of  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

William LearaQuick-Start Guide to UDK2014

Getting the UEFI Development Kit (UDK) installed and building is the first step in attempting to work in BIOS development.  Here is my experience getting the latest version of the UDK, UDK 2014, to work in Windows.

Step 1:  Download UDK 2014 (101MB)

Step 2:  The main .ZIP is a collection of .ZIPs.  First, extract

Step 3:  This is tricky:  you next have to unzip BaseTools(Windows).zip, and it has to be put in a subdirectory of the MyWorkSpace directory from Step 2.  The “BaseTools” directory should be at a peer level to Build, Conf, CryptoPkg, etc.  Note that this will entail overwriting several files, e.g., EDKSETUP.BAT—this is okay.  The final directory structure should look like:
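Because misplacing BaseTools is the most common mistake in Step 3, a quick sanity check can confirm the layout. This is my own illustrative sketch (only the directory names come from the release notes, the checker itself is not part of the UDK):

```python
import os

# Top-level directories expected as peers inside MyWorkSpace after Step 3.
EXPECTED = {"BaseTools", "Build", "Conf", "CryptoPkg"}

def layout_ok(entries):
    """True when every expected peer directory name is present."""
    return EXPECTED <= set(entries)

def check_workspace(path):
    """Check an unpacked MyWorkSpace directory on disk."""
    return layout_ok(os.listdir(path))
```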






Step 4:  Open a Command Prompt and cd to MyWorkSpace\.  Type the command

edksetup --NT32

to initialize the build environment.

Step 5:  Build the virtual BIOS environment with the toolchain flag that matches your Visual Studio version:

> build -t VS2008x86   (Visual Studio 2008)

> build -t VS2010x86   (Visual Studio 2010)

Step 6:  Launch SECMAIN.EXE from the directory:


A virtual machine will start and you will boot to an EFI shell.  Type “help” for a list of commands—see Harnessing the UEFI Shell (below) for more information re: the UEFI shell.  Congratulations, at this point you are ready to develop PEI modules and DXE drivers!

That is the absolute minimum work necessary to boot to the NT32 virtual machine.  There is additional information in the file UDK2014-ReleaseNotes-MyWorkSpace.txt, which is included in MyWorkSpace\.


Jason BocheVMware vCenter Site Recovery Manager 5.8 First Look

VMware vCenter Site Recovery Manager 5.8 made its debut this week at VMworld 2014 in San Francisco.  Over the past few weeks I’ve had my hands on a release candidate version, and I’ve put together a short series of videos highlighting what’s new and also providing a first look at SRM management through the new web client plug-in.  I hope you enjoy.

I’ll be at VMworld through the end of the week.  Stop and say Hi – I’d love to meet you.


VMware vCenter Site Recovery Manager 5.8 Part 1

VMware vCenter Site Recovery Manager 5.8 Part 2

VMware vCenter Site Recovery Manager 5.8 Part 3




William LearaUncrustify Your BIOS

One of my favorite newsletters is Jack Ganssle’s The Embedded Muse.  In a recent issue, Jack discussed helpful tools for embedded systems development, and the tool Uncrustify came up.  I decided to run the tool on the UDK 2014 source, and this post discusses the results.

Uncrustify is an open-source code beautifier, comparable to other popular alternatives such as GNU Indent or Artistic Style.  Code beautifiers (a.k.a. pretty-printers) make code easier to read. They automatically update source code to use one consistent style throughout.  The user creates a configuration file that contains specifications for the types of code changes to make:  tab/space settings, newline options, brace styles, etc.  After feeding the configuration file and target source code into the beautifier tool, the tool modifies the source code according to the user’s specified configuration.  After I dug further into Uncrustify, however, I discovered the real star of the show—Universal Indent GUI!

By themselves, the various code beautifiers like Uncrustify are cumbersome to use.  Much time is spent examining all the various configuration options (which number in the hundreds) and manually editing terse configuration files—a tedious affair.  Thankfully, graphical front-ends exist for these tools, and Universal Indent GUI is best-in-class.

Here are four great features of Universal Indent GUI:


1.  Universal Indent GUI contains all the various code beautifier applications.

No need to download and install Uncrustify, GNU Indent, or any of the others.  Just select your desired code beautifier application and Universal Indent GUI will update its interface to display the options pertinent to the selected beautifier.  There are twenty-four different code beautifier applications supported by Universal Indent GUI!


2.  An elegant help system

The popular code beautifier applications offer literally hundreds of options.  Having to read through PDFs or on-line HTML pages in order to absorb all the many configuration settings is extremely tedious.  The genius of Universal Indent GUI is that a user can hover over an option and trigger a yellow popup containing an explanation of that particular configuration option.  The user can change the options important to him and ignore the rest.  Simple and intuitive!


3.  Live Indent Preview

Even with the nice help system, nothing beats actually viewing the source code with the various options applied, so you can make sure you are getting exactly what you think you’re getting.  Universal Indent GUI allows you to open a source code file, turn on the Live Indent Preview feature, and see your source code respond to configuration changes in real time.


4.  Universal Indent GUI outputs configuration files and batch files

Once you’ve selected and configured the options important to you, a couple of clicks will let you a) save a configuration file ready for your code beautifier application and/or b) create a batch file/shell script that will automatically apply your new configuration file to a source code directory tree.  These files can then be shared among all the members of your development team to ensure consistent style.  Moreover, a source code repository pre-commit hook could be established to enforce a standard programming style.


Universal Indent GUI:  Summary

Universal Indent GUI has several other convenient configuration options which are simple and do not get in your way.  The application is available for both Windows and Linux.  There is no special installation required—simply unzip and execute.  I was very impressed with this tool, and highly recommend it to anyone who considers programming style an important characteristic of well-crafted software.  Tip:  use the Uncrustify config.txt file in order to browse what Uncrustify options are available within Universal Indent GUI.


UEFI BIOS Coding Standards

Intel has created a coding standards guide for EDK II.  Below are the parts of the coding standards that could plausibly be enforced by a code beautifier application, along with the Uncrustify options I selected in Universal Indent GUI to make the UDK 2014 source code compliant with Intel’s coding standards (yes, the Intel UDK is not itself compliant with the Intel coding standards…):

  • Limit line length to 80 characters.
  • Use 2 spaces of indentation.
  • Never use tab characters.
    • Set the editor to insert spaces rather than a tab character.
  • if, for, while, etc. always use { }, even when there is only one statement.
    • The opening brace ({) should always appear at the end of the previous line.
  • The opening brace ({) for a function should always appear separately on a new line.
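To make those rules concrete, here is a rough sketch of what the corresponding Uncrustify settings could look like. The option names come from Uncrustify’s documented option set, but this is my illustration, not the author’s actual configuration file:

```
# 80-column limit
code_width           = 80
# two-space indent, spaces only
indent_columns       = 2
indent_with_tabs     = 0
# always brace single-statement if/for/while/do bodies
mod_full_brace_if    = add
mod_full_brace_for   = add
mod_full_brace_while = add
mod_full_brace_do    = add
# control-statement braces stay at the end of the previous line...
nl_if_brace          = remove
nl_for_brace         = remove
nl_while_brace       = remove
# ...but function-definition braces get their own line
nl_fdef_brace        = add
```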


Using Universal Indent GUI, I created the following batch and configuration files for Uncrustify to operate on the UDK source:

Running it on the UDK 2014 code base took about one minute on my 3GHz, 8-core Windows 8 system.  Sample:


The job made many changes, mostly around enforcing the 80-column limit, which the UDK source does not adhere to.  I also noticed that trailing spaces were removed from lines.  I think it would be a lot of fun to play with all the various Uncrustify options and use the tool to automate work.

Do you use a code beautifier application in your organization?  Are these tools helpful, or a hindrance?  What are your experiences with them, positive or negative?  Which of the many code beautifier applications have you tried?  Leave a comment!

Ravikanth ChagantiSession Slides: Community Day 2014 – Introduction to Microsoft Azure Compute

Microsoft Azure offers several services, each falling into one of four major categories: Compute, Data, App, and Network Services. This session takes you through an overview of the Microsoft Azure Compute services. Introduction to Microsoft Azure Compute from Ravikanth Chaganti

Rob HirschfeldCloud Culture: Reality has become a video game [Collaborative Series 3/8]

This post is #3 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.


Yes. Video games are the formative computer user experience (a.k.a. UX) for nearly everyone born since 1977. Demographers call these people Gen X, Gen Y, or Millennials, but we use the more general term “Digital Natives” because they were born into a world surrounded by interactive digital technology, starting with their toys and learning devices.

Malcolm Gladwell explains, in his book Outliers, that it takes 10,000 hours of practice to develop a core skill. In this case, video games have trained all generations since 1977 in a whole new way of thinking. It’s not worth debating whether this is a common and ubiquitous experience; instead, we’re going to discuss the impact of this cultural tsunami.

Before we dive into impacts, it is critical for you to suspend your attitude about video games as a frivolous diversion. Brad explores this topic in Liquid Leadership, and Jane McGonigal, in Reality Is Broken, spends significant time exploring the incredibly valuable real-world skills that Digital Natives hone playing games. When they are “gaming,” they are doing things that adults would classify as serious work:

  • Designing buildings and creating machines that work within their environment
  • Hosting communities and enforcing discipline within the group
  • Recruiting talent to collaborate on shared projects
  • Writing programs that improve their productivity
  • Solving challenging mental and physical problems under demanding time pressures
  • Learning to persevere through multiple trials and iterative learning
  • Memorizing complex sequences, facts, resource constraints, and situational rules.

Why focus on video gamers?

Because this series is about doing business with Digital Natives and video games are a core developmental experience.

The impact of Cloud Culture on technology has profound implications and is fertile ground for future collaboration between Rob and Brad.  However, we both felt that the challenge of selling to gamers crystallized the culture clash in a very practical and financially meaningful sense.  Culture can be a “soft” topic, but we’re putting a hard edge on it by bringing it home to business impacts.

Digital Natives play on a global scale and interact with each other in ways that Digital Immigrants cannot imagine. Brad tells it best with this story about his nephew:

Years ago, in a hurry to leave the house, we called out to our video game playing nephew to join us for dinner.

“Sebastian, we’re ready.” I was trying to be as gentle as possible without sounding Draconian. Those were the parenting methods of my father’s generation. Structure. Discipline. Hierarchy. Fear. Instead, I wanted to be the Cool Uncle.

“I can’t,” he exclaimed as wooden drum sticks pounded out their high-pitched rhythm on the all too familiar color-coded plastic sensors of a Rock Band drum kit.

“What do you mean you can’t? Just stop the song, save your data, and let’s go.”

“You don’t understand. I’m in the middle of a song.” Tom Sawyer by RUSH to be exact. He was tackling Neil Peart. Not an easy task. I was impressed.

“What do you mean I don’t understand? Shut it off.” By now my impatience was noticeable. Wow, I lasted 10 seconds longer than my father would have in this same scenario. Progress, I guess.

And then my 17-year-old nephew hit me with some cold hard facts without even knowing it… “You don’t understand… the guitar player is some guy in France, and the bass player is this girl in Japan.”

In my mind the aneurysm that was forming just blew… “What did he just say?”

And there it was, sitting in my living room—a citizen of the digital age. He was connected to the world as if this was normal. Trained in virtualization, connected and involved in a world I was not even aware of!

My wife and I just looked at each other. This was the beginning of the work I do today. To get businesses to realize the world of the Digital Worker is a completely different world. This is a generation prepared to work in The Cloud Culture of the future.

A Quote from Liquid Leadership, Page 94, How Technology Influences Behavior…

In an article in the Atlantic magazine, writer Nicholas Carr (author of The Shallows: What the Internet Is Doing to Our Brains) cites sociologist Daniel Bell as claiming the following: “Whenever we begin to use ‘intellectual technologies’ such as computers (or video games)—tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies.”

In other words, the technology we use changes our behavior!

There’s another important consideration about gamers and Digital Natives. As we stated in post 1, our focus for this series is not the average gamer; we are seeking the next generation of IT decision makers. Those people will be the true digital enthusiasts who have devoted even more energy to mastering the culture of gaming and understand intuitively how to win in the cloud.

“All your base are belong to us.”

Translation: If you’re not a gamer, can you work with Digital Natives?

Our goal for this series is to provide you with actionable insights that do not require rewriting how you work. We do not expect you to get a World of Warcraft subscription and try to catch up. If you already are a gamer, we’ll help you cope with your Digital Immigrant coworkers.

In the next posts, we will explain four key culture differences between Digital Immigrants and Digital Natives. For each, we explore the basis for the difference and discuss how to facilitate Digital Natives’ decision-making processes.

Rob Hirschfeld: Cloud Culture Series TL;DR? Generation Cloud Cheat sheet [Collaborative Series 2/8]


This post is #2 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Your attention is valuable to us! In this section, you will find the contents of this entire blog series distilled down into a flow chart and one-page table.  Our plan is to release one post each Wednesday at 1 pm ET.

Graphical table of contents

The following flow chart is provided for readers who are looking to maximize the efficiency of their reading experience.

If you are unfamiliar with flow charts, simply enter at the top left oval. Diamonds are questions for you to choose between answers on the departing arrows. The curved bottom boxes are posts in the series.

Culture conflict table (the Red versus Blue game map)

Our fundamental challenge is that the cultures of Digital Immigrants and Natives are diametrically opposed.  The Culture Conflict Table, below, maps out the key concepts that we explore in depth during this blog series.

Foundation: Each culture has different expectations in partners

Digital Immigrants (N00Bs): Obey Rules

  • They want us to prove we are worthy to achieve “trusted advisor” status.
  • They are seeking partners who fit within their existing business practices.

Digital Natives (L33Ts): Test Boundaries

  • They want us to prove that we are innovative and flexible.
  • They are seeking partners who bring new ideas that improve their business.

1. Organizational Hierarchy (see No Spacesuits, Post 4)

Digital Immigrants (N00Bs): Permission Driven

  • Organizational hierarchy is efficient
  • Feel important talking high in the org
  • Higher ranks can make commitments
  • Bosses make decisions (slowly)

Digital Natives (L33Ts): Peer-to-Peer Driven

  • Organizational hierarchy is limiting
  • Feel productive talking lower in the org
  • Lower ranks are more collaborative
  • Teams make decisions (quickly)

2. Communication Patterns (see MMOG as Job Training, Post 5)

Digital Immigrants (N00Bs): Formalized & Structured

  • Waits for permission
  • Bounded & linear
  • Requirements focused
  • Questions are interruptions

Digital Natives (L33Ts): Casual & Interrupting

  • Does NOT KNOW they need permission
  • Open ended
  • Discovering & listening
  • Questions show engagement

3. Risks and Rewards (see Level Up, Post 6)

Digital Immigrants (N00Bs): Obeys Rules

  • Avoid risk—mistakes get you fired!
  • Wait and see
  • Fear of “looking foolish”

Digital Natives (L33Ts): Breaks Rules

  • Embrace risk—mistakes speed learning
  • Iterate to succeed
  • Risks get you “in the game”

4. Building your Expertise (see Becoming L33T, Post 7)

Digital Immigrants (N00Bs): Knowledge is Concentrated

  • Expertise is hard to get (Diploma)
  • Keeps secrets (keys to success)
  • Quantitative—you can measure it

Digital Natives (L33Ts): Knowledge is Distributed and Shared

  • Expertise is easy to get (Google)
  • Likes sharing to earn respect
  • Qualitative—trusts intuition

Hopefully, this condensed version got you thinking.  In the next post, we start to break this information down.



William Leara: A Book Every BIOS Engineer Will Love

Vincent Zimmer published a blog post asking if there was a particular book that inspired your choice of profession.  For me, one of my favorite and most inspiring books is The Soul of a New Machine, by Tracy Kidder.  Here, I’m not alone—this book won the Pulitzer Prize in the early 1980s and is widely admired by many people, especially those who work at computer hardware companies.
The book tells the story of Data General Corporation designing their first 32-bit minicomputer.  You may be thinking “that sounds like the dullest thing I can possibly think of”, but it’s a wonderful and entertaining story.  One of my favorite parts is in the Prologue.  (See, it gets good quickly!)

The Prologue begins with the story of five guys who go sailing in order to enjoy a short, stress-free vacation.  Four are friends, but they needed a fifth, so they bring along an interested friend-of-a-friend:  Mr. Tom West.

Tom West is the book’s protagonist and the project leader of the aforementioned new Data General 32-bit minicomputer effort.  He became a hero to computer engineers after the publication of Soul of a New Machine.

But back to the sailboat—one evening, an unexpected storm assails the small boat.  The storm is unexpected in timing, and also unexpected in strength—these amateur sailors fear for their lives.  Tom West keeps his cool, takes charge, goes into action, and, to cut to the chase, the crew survives just fine.
Months after that sailing expedition, the captain, a member of the crew (who was a psychologist by profession), and the rest of the crew (sans West) are sitting around reminiscing:
The people who shared the journey remembered West.  The following winter, describing the nasty northeaster over dinner, the captain remarked, “That fellow West is a good man in a storm.”  The psychologist did not see West again, but remained curious about him.  “He didn’t sleep for four nights!  Four whole nights.”  And if that trip had been his idea of a vacation, where, the psychologist wanted to know, did he work?
And so the reader is launched into the riveting story of Data General creating the Eclipse MV/8000.  It’s a story of corporate intrigue, late nights, tough debugging sessions, colorful personalities, and, against all odds, ultimately a successful and satisfying product launch.

Chapter Nine is dedicated to Tom; his upbringing, his home, and his daily routine.  A funny Tom West anecdote:
Another story made the rounds:  that in turning down a suggestion that the group buy a new logic analyzer, West once said, “An analyzer costs ten thousand dollars.  Overtime for engineers is free.”
But the entire book isn’t just about Tom West.  It’s a beautifully crafted adventure story about how this group of eccentric hardware and firmware guys worked around the clock for over a year to produce a great machine.  An example chapter title:  The Case of the Missing NAND Gate. (!)

Wired magazine wrote a great article about the book.  Here’s a snippet:
More than a simple catalog of events or stale corporate history, Soul lays bare the life of the modern engineer - the egghead toiling and tinkering in the basement, forsaking a social life for a technical one. It's a glimpse into the mysterious motivations, the quiet revelations, and the spectacular devotions of engineers—and, in particular, of West. Here is the project's enigmatic, icy leader, the man whom one engineer calls the "prince of darkness," but who quietly and deliberately protects his team and his machine. Here is the raw conflict of a corporate environment, factions clawing for resources as West shields his crew from the political wars of attrition fought over every circuit board and mode bit. Here are the power plays, the passion, and the burnout - the inside tale of how it all unfolded.
Mr. West died in 2011 at the age of 71.

I cannot do justice to this book—PLEASE do yourself a favor and pick it up.  You will not regret it.

What about you?  Is there a book that inspired you, or continues to inspire you in your vocation?  Leave a comment!

William Leara: Welcome!

I’m starting a new blog in order to discuss BIOS programming—the art and science of bootstrap firmware development for computers.  In addition, I expect to discuss general software development topics and my affinity for all things computer related.  My intent is to participate in the BIOS community, share what I’m learning, and learn from all of you.  I hope you will subscribe to the blog (via RSS or email) and use the commenting facility to discuss the content!


William Leara: Will I Be Jailed For Saying “UEFI BIOS”?

To hear some people talk, it is a crime to say “UEFI BIOS”.  No, they insist, there was “BIOS”, which has been supplanted by “UEFI”, or “UEFI firmware”.
You do not have a ‘UEFI BIOS’. No-one has a ‘UEFI BIOS’. Please don’t ever say ‘UEFI BIOS’.
Microsoft, in particular, tries hard to drive home this distinction—that computers today have gotten rid of BIOS and now use UEFI.  The Wikipedia article on UEFI implies something similar.

Is this distinction helpful?  Is it accurate?  The fact of the matter is that from the earliest days of the microcomputer revolution, the mid-to-late 1970s, computers have required a bootstrap firmware program. Following the lead of Gary Kildall’s CP/M, this program was called the BIOS.  IBM introduced their PC in 1981 and continued to use the term BIOS.  Just because the industry has embraced a new standard, UEFI, does not mean that somehow the term “BIOS” refers to something else.  I know from my work experience as a BIOS developer that my colleagues and I use the term “UEFI BIOS”—we used to have Legacy BIOS, now we have UEFI BIOS.  It’s still the system’s bootstrap firmware.

Here’s an article from Darien Graham-Smith of PC Pro introducing UEFI and using the term “UEFI BIOS”:

Let’s look to the real experts to see what they say—namely, Intel, the originators of the UEFI standard.  Intel dedicated an entire issue of the Intel Technology Journal (Volume 15, Issue 1) to UEFI.  In that journal, the term “UEFI BIOS” was used a total of six times.  Example:
The UEFI BIOS is gaining new capabilities because UEFI lowers the barrier to implementing new ideas that work on every PC.
This edition of the Intel Technology Journal was written by a veritable who’s who of the BIOS industry:  Intel, IBM, HP, AMI, Phoenix Technologies, Lenovo, and Insyde, including some of the Founding Fathers of UEFI:  Vincent Zimmer and Michael Rothman.  If they did not see this term as incorrect, then neither should we.

While the UEFI Spec itself does not appear to use the term “UEFI BIOS”, it does use the term “Legacy BIOS” to refer to the older standard, which to me implies that UEFI is the new, non-legacy BIOS.

Anyway, this question is not likely to become one of the great debates of our time, but I propose that the term “UEFI BIOS” is perfectly acceptable.  Now, on to UEFI BIOS programming!

William Leara: The Case of the Mysterious __chkstk

I was making a small change to a function:  adding a couple of UINTN automatic variables, a new automatic EFI_GUID variable, and a handful of changed lines.

Suddenly, the project would no longer compile.  I got this error message from the Microsoft linker:

TSEHooks.obj : error LNK2019: unresolved external symbol __chkstk referenced in function PostProcessKey

Build\TSE.dll : fatal error LNK1120: 1 unresolved externals

NMAKE : fatal error U1077: 'C:\WinDDK\7600.16385.1\bin\x86\amd64\LINK.EXE' : return code '0x460'

Build Error!!

This surprised me—why is the linker complaining?  “Unresolved external symbol”—I didn’t add a new function call, and neither did I add an extern reference.  Were my linker paths messed up somehow?  After burning lots of time on various wild goose chases, I started searching for this “__chkstk”—what is it?

I started searching Google for help, and found a forum posting with the following comment:

The "chkstk" unresolved external is caused by the compiler checking to see if you've occupied more than (I think 4K on an x86 system) stack space for local variables…
Could I have pushed the function over the maximum stack space?  As I mentioned, I only added two UINTNs (8B each) and an EFI_GUID (16B), for 32B total.

Looking further, I noticed that one of the already existing automatic variables in this function was a SETUP_DATA structure variable—the variable type that holds all the BIOS Setup program settings information.  This was the problem—there are over 1200 variables contained in this one structure!

After further investigation, I found the following from Microsoft:

__chkstk Routine

Called by the compiler when you have more than one page of local variables in your function.

__chkstk Routine is a helper routine for the C compiler.  For x86 compilers, __chkstk Routine is called when the local variables exceed 4K bytes; for x64 compilers it is 8K.

My solution was going to be to move the SETUP_DATA variable to file scope with internal linkage, but to my surprise I found someone had already done that!  So, there was a file-scope SETUP_DATA variable, and then someone created another automatic SETUP_DATA variable within the scope of one of the functions.  Messy!  Anyway, it made my job easier—I simply removed the auto copy of SETUP_DATA and the linker error went away.
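To make the failure mode and the fix concrete, here is a minimal sketch.  The SETUP_DATA below is a hypothetical stand-in with a footprint similar to the real structure (the actual layout is product-specific), and the 4K/8K thresholds are the ones from Microsoft’s documentation quoted above:

```c
#include <stddef.h>

/* Hypothetical stand-in for the real SETUP_DATA structure: with over
   1200 members, it blows past the 4 KB (x86) / 8 KB (x64) local-variable
   thresholds at which the Microsoft compiler emits a __chkstk call. */
typedef struct {
    unsigned long long Fields[1200];   /* 9600 bytes */
} SETUP_DATA;

/* Problem pattern: the whole structure is an automatic variable, so the
   function's stack frame exceeds the threshold and the linker must
   resolve __chkstk for this object file. */
size_t StackHeavyFrame(void) {
    SETUP_DATA Local = {{0}};          /* >8 KB of automatic storage */
    Local.Fields[0] = 1;
    return sizeof Local;
}

/* Fix pattern: one file-scope copy with internal linkage; the
   function's own frame stays small and no stack probe is needed. */
static SETUP_DATA gSetupData;

size_t StackLightFrame(void) {
    gSetupData.Fields[0] = 1;          /* same data, no stack cost */
    return sizeof gSetupData;
}
```

The usual caveat applies to the fix: a file-scope buffer is shared state, so it only works when the function isn’t re-entered concurrently—typically true in BIOS Setup code.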

Two Takeaways

1) Microsoft, couldn’t there be a better message for communicating that the function has exceeded its stack space?  Something like:

Stack overflow in function PostProcessKey:  requested X bytes, maximum limit is 8192 bytes

rather than:

LNK2019: unresolved external symbol __chkstk referenced in function PostProcessKey

2) Developers, be on the lookout for usages of the BIOS Setup data structure.  I’m guessing it’s probably the largest of all the UEFI variables, and by a good margin.

Mark Cathcart: Power corrupts

Power corrupts; absolute power corrupts absolutely

John Dalberg-Acton, the historian and moralist, first expressed this famous opinion in a letter to Bishop Mandell Creighton in 1887. I was reminded of it on Friday when it was announced that Governor Rick Perry of Texas had been indicted.

Although I’m clearly more of a social activist than a Republican or Conservative, this post isn’t really about politics. It may or may not be that Perry has a case to answer. What is clear is that the lack of a term limit for the Governor of Texas has, as always, allowed the Governor to focus more on his succession and his politics than on the people that elected him and their needs.

I’m personally reminded of Margaret Thatcher, who enacted sweeping changes in her time but who, in her 3rd term, spent more time inward looking and in-fighting than outward looking. More focused on those that would succeed her than on what the country needed to succeed. Major, Howe, Heseltine, Lawson, et al.

Thatcher these days is remembered mostly for consolidating her own power and the debacle that ended her reign rather than her true legacy: creating the housing crisis and the banking crisis. Thatcher’s government started moving people to incapacity benefit rather than unemployment to hide the true state of the economy from the people. Blair and Brown were mostly the same; after a couple of years of shifting emphasis and politics it became the same farcical self-protection.

And so it has become the same with Perry and his legacy. Irrespective of the merit of this indictment, what’s clear is that Perry’s normal has changed to defending his legacy and Abbott. Abbott, meanwhile, moves to make as much as possible about Perry’s activities secret. This includes the details of Governor Perry’s expense claims: sensitive and secret, but not limited to that. Abbott also feels the location of chemical storage is a threat to our liberty, and not to be easily publicly accessible. Redaction, it would appear, is a lost art.

For the layman it is impossible to understand how much of the who and what of the CPRIT affair is real. Was Abbott’s oversight of CPRIT politically motivated? Did Abbott really turn a blind eye to the goings-on at CPRIT, and did Perry and his staff know about and approve this?

If they did, then their pursuit of Lehmberg is bogus and their attempts to stop the Public Integrity Unit (PIU) self-serving. And there is the rub: it really doesn’t matter if it was legal or not. Perry needs to go, term limits should mandate no more than two terms, and Abbott should be seriously questioned about his motivation. Otherwise, as Thatcher goes, Major goes; as Blair goes, so Brown goes; as Perry goes, so Abbott goes, and too much power shared out as a grace and favor does no one any good at all, least of all the local taxpayers.

And for the record, Lehmberg’s arrest for drunk driving was shameful, and yes, she should have resigned. But the fact that she didn’t doesn’t make it OK for the Governor to abuse his power to try to remove her. Don’t let the Lehmberg arrest distract, though, from the real issues: abuse of power and term limits.

Gina Minks: The thing about it: it just sucks.

I know I’m really lucky. I have a job I like to do, great boss, great people to work with. It’s steady pay with good benefits. I have two awesome kids, I live in a great place. We all know #FredTheDog is the best dog in the entire world. My childhood had issues – goodness knows nothing like many of my friends. Again, lucky. My parents didn’t do drugs or drink, mostly because they were

read more here

Hollis Tibbetts (Ulitzer): ARM Server to Transform Cloud and Big Data to "Internet of Things"

A completely new class of computing platform is on the horizon. These machines are called Microservers by some, ARM Servers by others, and sometimes even ARM-based Servers. No matter what you call them, Microservers will have a huge impact on the data center and on server computing in general. Although few people are familiar with Microservers today, their impact will be felt very soon. This new category of computing platform is available today and is predicted to have triple-digit growth rates for some years to come, growing to over 20% of the server market by 2016, according to Oppenheimer (“Cloudy With A Chance of ARM,” Oppenheimer Equity Research Industry Report).

read more

Rob Hirschfeld: Your baby is ugly! Picking which code is required for Commercial Core.

There’s no point in sugar-coating this: selecting API and code sections for core requires making hard choices and saying no.  DefCore makes this fair by 1) defining principles for selection, 2) going slooooowly to limit surprises and 3) being transparent in operation.  When you’re telling someone their baby is not handsome enough, you’d better be able to explain why.

The truth is that from DefCore’s perspective, all babies are ugly.  If we are seeking stability and interoperability, then we’re looking for adults not babies or adolescents.

Explaining why is exactly what DefCore does by defining criteria and principles for our decisions.  When we do it right, it also drives a positive feedback loop in the community because the purpose of designated sections is to give clear guidance to commercial contributors where we expect them to be contributing upstream.  By making this code required for Core, we are incenting OpenStack vendors to collaborate on the features and quality of these sections.

This does not lessen the undesignated sections!  Contributions in those areas are vital to innovation; however, they are, by design, more dynamic, specialized or single vendor than the designated areas.

The seven principles of designated sections (see my post with TC member Michael Still) as defined by the Technical Committee are:


Code should be designated when:

  1. code provides the project external REST API, or
  2. code is shared and provides common functionality for all options, or
  3. code implements logic that is critical for cross-platform operation

Code should not be designated when:

  1. code interfaces to vendor-specific functions, or
  2. project design explicitly intended this section to be replaceable, or
  3. code extends the project external REST API in a new or different way, or
  4. code is being deprecated

While the seven principles inform our choices, DefCore needs some clarifications to ensure we can complete the work in a timely, fair and practical way.  Here are our additions:

8.     UNdesignated by Default

  • Unless code is designated, it is assumed to be undesignated.
  • This aligns with the Apache license.
  • We have a preference for smaller core.

9.      Designated by Consensus

  • If the community cannot reach a consensus about designation then it is considered undesignated.
  • Time to reach consensus will be short: days, not months.
  • Except for obvious trolling, this prevents endless wrangling.
  • If there’s a difference of opinion then the safe choice is undesignated.

10.      Designated is Guidance

  • Loose descriptions of designated sections are acceptable.
  • The goal is guidance on where we want upstream contributions, not a code inspection police state.
  • Guidance will be revised per release as part of the DefCore process.

In my next DefCore post, I’ll review how these 10 principles are applied to the Havana release that is going through community review before Board approval.

Ravikanth Chaganti: Transforming the Data Center – Bangalore, India

Microsoft MVP community, Bangalore IT Pro, Bangalore PowerShell User Group, and Microsoft are proud to announce the Transform Data Center (in-person) event in Bangalore, India. This event is hosted at the Microsoft Office in Bangalore. Registration (limited seats): I will be speaking here on Azure Backup and Azure Hyper-V Recovery Manager. Deepak Dhami (PowerShell MVP) will…

Rob Hirschfeld: Cloud Culture: New IT leaders are transforming the way we create and purchase technology. [Collaborative Series 1/8]

Subtitle: Why L33Ts don’t buy from N00Bs

Brad Szollose and I want to engage you in a discussion about how culture shapes technology [cross post link].  We connected over Brad’s best-selling book, Liquid Leadership, and we’ve been geeking about cultural impacts in tech since 2011.


In these 8 posts, we explore what drives the next generation of IT decision makers, starting from the framework of Millennials and Boomers.  Recently, we’ve seen that these “age based generations” are artificially limiting; however, they provide a workable context for this series that we will revisit in the future.

Our target is leaders who were raised with computers as Digital Natives. They approach business decisions from a new perspective that has been honed by thousands of hours of interactive games, collaboration with global communities, and intuitive mastery of all things digital.

The members of this “Generation Cloud” are not just more comfortable with technology; they use it differently and interact with each other in highly connected communities. They function easily with minimal supervision, self-organize into diverse teams, dive into new situations, take risks easily, and adapt strategies fluidly. Using cloud technologies and computer games, they have become very effective winners.

In this series, we examine three key aspects of next-generation leaders and offer five points to get to the top of your game. Our goal is to find, nurture, and collaborate with them because they are rewriting the script for success.

We have seen that there is a technology-driven culture change that is reshaping how business is being practiced.  Let’s dig in!

What is Liquid Leadership?

“a fluid style of leadership that continuously sustains the flow of ideas in an organization in order to create opportunities in an ever-shifting marketplace.”

Forever Learning?

In his groundbreaking 1970 book, Future Shock, Alvin Toffler pointed out that in the not-too-distant future, technology would inundate the human race with its demands, overwhelming those not prepared for it. He compared this overwhelming feeling to culture shock.

Welcome to the future!

Part of the journey in discussing this topic is to embrace the digital lexicon. To help with translations we are offering numerous subtitles and sidebars. For example, the subtitle “L33Ts don’t buy from N00Bs” translates to “Digital elites don’t buy from technical newcomers.”

Loosen your tie and relax; we’re going to have some fun together.  We’ve got 7 more posts in this cloud culture series.  

We’ve also included more background about the series and authors…

Story Time: When Rob was followed out of the room

Culture is not about graphs and numbers, it’s about people and stories. So we begin by retelling the event that sparked Rob’s realization that selling next-generation technology like cloud is not about the technology but the culture of the customer.

A few years ago, I (Rob) was asked to join an executive briefing to present our, at the time, nascent OpenStack™ Powered Cloud solution to a longtime customer. As a non-profit with a huge Web presence, the customer was in an elite class and rated high-ranking presenters with highly refined PowerPoint decks; unfortunately, these executive presentations also tend to be very formal and scripted. By the time I entered late in the day, the members of the audience were looking fatigued and grumpy.

Unlike other presenters, I didn’t have prepared slides, scripted demos, or even a fully working product. Even worse, the customer was known as highly technical and impatient. Frankly, the sales team was already making contingency plans and lining up a backup presenter when the customer chewed me up and spit me out. Given all these deficits, my only strategy was to ask questions and rely on my experience.

That strategy was a game changer.

My opening question (about DevOps) completely changed the dynamic. Of the entire day’s presenters, I was the first ready to collaborate with them in real time about their technology environment. They were not looking for answers; they wanted a discussion about the dynamics of the market with an expert who was also in the field.

We went back and forth about DevOps, OpenStack, and cloud technologies for the next hour. For some points, I was the expert with specific technical details. For others, they shared their deep expertise and challenges on running a top Web property. It was a conversation in which Dell demonstrated we had the collaboration and innovation that this customer was looking for in a technology partner.

When my slot was over, they left the next speaker standing alone and followed me out of the room to continue the discussion. It was not the product that excited them; it was that I had addressed them according to their internal cultural norms, and they immediately noticed the difference.
What is DevOps?

DevOps (from merging Development and Operations) is a paradigm shift for information technology. Our objective is to eliminate the barriers between creating software and delivering it to the data center. The result is that value created by software engineers gets to market more quickly with higher quality.

This level of reaction caught us by surprise at the time, but it makes perfect sense looking back with a cultural lens. It wasn’t that Rob was some sort of superstar—those who know him know that he’s too mild-mannered for that (according to Brad, at least). What caused the excitement was that Rob had hit their cultural engagement hot button!

Our point of view: About the authors

Rob Hirschfeld and Brad Szollose are both proud technology geeks, but they’re geeks from different generations who enjoy each other’s perspective on this brave new world.

Rob is a first-generation Digital Native. He grew up in Baltimore reprogramming anything with a keyboard—from a Casio VL-Tone and beyond. In 2000, he learned about server virtualization and never looked back. In 2008, he realized his teen ambition to convert a gas car to run electric. Today, from his Dell offices and local coffee shops, he creates highly disruptive open source cloud technologies for Dell’s customers.

Brad is a Cusp Baby Boomer who grew up watching the original Star Trek series, secretly wishing he would be commanding a Constitution Class Starship in the not-too-distant future. Since that would take a while, Brad became a technology-driven creative director who cofounded one of the very first Internet development agencies during the dot-com boom. As a Web pioneer, Brad was forced to invent a new management model that engaged the first wave of Digital Workers. Today, Brad helps organizations like Dell close the digital divide by understanding it as a cultural divide created by new tech-savvy workers … and customers.

Beyond the fun of understanding each other better, we are collaborating on this white paper for different reasons.

  • Brad is fostering liquid leaders who have the vision to span cultures and to close the gap between cultures.
  • Rob is building communities with the vision to use cloud products that fit the Digital Native culture.

Kevin Houston: Why Dell’s PowerEdge VRTX is Ideal for Virtualization

I recently had a customer looking for 32 Ethernet ports on a 4 server system to drive a virtualization platform.  At 8 x 1GbE per compute node, this was a typical VMware virtualization platform (they had not moved to 10GbE yet) but it’s not an easy task to perform on blade servers – however the Dell PowerEdge VRTX is an ideal platform, especially for remote locations.

The Dell PowerEdge VRTX infrastructure holds up to 4 compute nodes and allows for up to 8 PCIe cards.  The unique design of the Dell PowerEdge VRTX allows a user to run up to 12 x 1GbE NICs per server by using the 4-port Network Daughter Card on the Dell PowerEdge M620 blade server and then adding two 4-port 1GbE NICs into the PCIe slots.  The 4 x 1GbE ports via the LAN on Motherboard plus the 8 x 1GbE ports via the PCIe cards offer a total of 12 x 1GbE NICs per compute node (see image for details), which should be more than enough for any virtualization environment.  As an added benefit, since the onboard LOM is a 1/10GbE card, users will be able to seamlessly upgrade to 10GbE by simply replacing the 1GbE switch with a 10GbE model when it becomes available later this year.
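For completeness, the port math above can be sanity-checked with a tiny sketch (the node and card counts mirror the configuration just described; the function names are my own, not any Dell tooling):

```c
/* Total 1GbE ports available to one compute node: LAN-on-Motherboard
   (NDC) ports plus the ports on its quad-port PCIe NICs. */
int ports_per_node(int lom_ports, int pcie_cards, int ports_per_card) {
    return lom_ports + pcie_cards * ports_per_card;
}

/* PCIe cards consumed across the chassis for a given per-node count;
   the VRTX chassis has 8 PCIe slots shared among its 4 nodes. */
int chassis_pcie_cards(int nodes, int cards_per_node) {
    return nodes * cards_per_node;
}
```

With the configuration above, ports_per_node(4, 2, 4) yields the 12 x 1GbE figure, chassis_pcie_cards(4, 2) shows the build consumes all 8 slots, and the customer’s original ask of 8 ports per node is met with room to spare.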

If you have a remote environment, or even a project that needs dedicated server/storage/networking, I encourage you to take a look at the Dell PowerEdge VRTX.  It’s pretty cool, and odds are, your Dell rep can help you try one out at no charge.

For full details on the Dell PowerEdge VRTX, check out this blog post I wrote in June 2013.


Kevin Houston is the founder and Editor-in-Chief of  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.


Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

Rob Hirschfeld: Patchwork Onion delivers stability & innovation: the graphic that explains how we determine OpenStack Core

This post was coauthored by the DefCore chairs, Rob Hirschfeld & Joshua McKenty.

The OpenStack board, through the DefCore committee, has been working to define “core” for commercial users using a combination of minimum required capabilities (APIs) and code (Designated Sections).  These minimums are decided on a per-project basis, so it can be difficult to visualize the overall effect on the Integrated Release.

We’ve created the patchwork onion graphic to help illustrate how core relates to the integrated release.  While this graphic is pretty complex, it was important to find a visual way to show how DefCore identifies distinct subsets of APIs and code from each project.  The graphic also shows that some projects have no core APIs and/or code.

For OpenStack to grow, we need to have BOTH stability and innovation.  We need to give clear guidance to the community what is stable foundation and what is exciting sandbox.  Without that guidance, OpenStack is perceived as risky and unstable by users and vendors. The purpose of defining “Core” is to be specific in addressing that need so we can move towards interoperability.

Interoperability enables an ecosystem with multiple commercial vendors, which is one of the primary goals of the OpenStack Foundation.

Originally, we thought OpenStack would have “core” and “non-core” projects, and we baked that expectation into the bylaws.  As we’ve progressed, it’s clear that we need a less binary definition.  Projects themselves have a maturity cycle (ecosystem -> incubated -> integrated) and within each project some APIs are robust and stable while others are innovative and fluctuating.

Encouraging this mix of stabilization and innovation has been an important factor in our discussions about DefCore.  Growing the user base requires encouraging stability and growing the developer base requires enabling innovation within the same projects.

The consequence is that we are required to clearly define subsets of capabilities (APIs) and implementation (code) that are required within each project.  Designating 100% of the API or code as Core stifles innovation because stability dictates limiting changes while designating 0% of the code (being API only) lessens the need to upstream.  Core reflects the stability and foundational nature of the code; unfortunately, many people incorrectly equate “being core” with the importance of the code, and politics ensues.
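To make the per-project subsets concrete, here is a small sketch of how core capabilities and designated sections might be modeled. The project names are real OpenStack projects, but the specific subsets shown are hypothetical illustrations, not actual DefCore decisions:

```python
# Hypothetical per-project core definitions: each project designates a
# subset of its capabilities (APIs) and of its code as "core".
projects = {
    "nova":  {"apis": {"servers", "flavors", "images-proxy"},
              "core_apis": {"servers", "flavors"},
              "designated_sections": {"compute/api"}},
    "swift": {"apis": {"objects", "containers"},
              "core_apis": {"objects", "containers"},
              "designated_sections": set()},   # API-only core: no code
    "heat":  {"apis": {"stacks"},
              "core_apis": set(),              # no core APIs at all
              "designated_sections": set()},
}

def core_fraction(projects):
    """Fraction of each project's APIs that are designated core."""
    return {name: len(p["core_apis"]) / len(p["apis"])
            for name, p in projects.items()}

print(core_fraction(projects))
```

Note that neither 0% nor 100% is designated across the board; each project lands somewhere on the spectrum, which is exactly the balance between stifling innovation and weakening the incentive to upstream.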

To combat the politics, DefCore has taken a transparent, principles-based approach to selecting core.  You can read about it in Rob’s upcoming “Ugly Babies” post (check back on 8/14).

Rob Hirschfeld7 Open Source lessons from your English Composition class

We often act as if coding, and especially open source coding, is a unique activity, and that’s hubris.  Most human activities follow common social patterns that should inform how we organize open source projects.  For example, research papers are deeply social, community-connected work.  Especially when published, written compositions are highly interconnected.  Even the most basic writing builds off other people’s work with due credit and tries to create something worth being used by later authors.

Here are seven principles to good writing that translate directly to good open source development:

  1. Research before writing – take some time to understand the background and goals of the project; otherwise you re-invent or draw bad conclusions.
  2. Give credit where due – your work has more credibility when you acknowledge and cross-reference the work you are building on. It also shows readers that you are not re-inventing.
  3. Follow the top authors – many topics have widely known authors who act as “super nodes” in the relationship graph. Recognizing these people will help guide your work, leads to better research and builds community.
  4. Find proofreaders – all writers need someone with perspective to review their work before it’s finished. Since we all need reviewers, we all also need to do reviews.
  5. Rework to get clarity – Simplicity and clarity take extra effort but they pay huge dividends for your audience.
  6. Don’t surprise your reader – Readers expect patterns and are distracted when you don’t follow them.
  7. Socialize your ideas – the purpose of writing/code is to make ideas durable. If it’s worth writing then it’s worth sharing.  Your artifact does not announce itself – you need to invest time in explaining it to people and making it accessible.

Thanks to Sean Roberts (a Hidden Influences collaborator) for his contributions to this post.  At OSCON, Sean Roberts said “companies should count open source as research [and development investment]” and I thought he’d said “…as research [papers].”  The misunderstanding was quickly resolved, and we were happy to discover that both interpretations were useful.

Rob HirschfeldBack of the Napkin to Presentation in 30 seconds

I wanted to share a handy new process for creating presentations that I’ve been using lately that involves using cocktail napkins, smart phones and Google presentations.

Here’s the Process:

  1. sketch an idea out with my colleagues on a napkin, whiteboard or notebook during our discussion,
  2. snap a picture and upload it to my Google drive from my phone,
  3. import the picture into my presentation using my phone,
  4. tell my team that I’ve updated the presentation using Slack on my phone.

Clearly, this is not a finished presentation; however, it does serve to quickly capture critical content from a discussion without disrupting the flow of ideas.  It also alerts everyone that we’re adding content and helps frame what that content will be as we polish it.  When we immediately position the napkin into a deck, it creates clear action items and reference points for the team.

While blindingly simple, having a quick feedback loop and visual placeholders translates into improved team communication.

Rob HirschfeldThe Upstream Imperative: paving the way for content creators is required for platform success

Since content is king, platform companies (like Google, Microsoft, Twitter, Facebook and Amazon) win by attracting developers to build on their services.  Open source tooling and frameworks are the critical interfaces for these adopters; consequently, they must invest in building communities around those platforms even if it means open sourcing previously internal-only tools.

This post expands on one of my OSCON observations: companies who write lots of code have discovered an imperative to upstream their internal projects.   For background, review my thoughts about open source and supply chain management.

Huh?  What is an “upstream imperative?”  If it sounds like what salmon do during spawning, then read the post-script!

Historically, companies with a lot of internal development tools had no incentive to open those projects.  In fact, the “collaboration tax” of open source discouraged companies from sharing code for essential operations.   Open source was long considered less featured and slower than commercial or internal projects; however, that perception has been totally shattered.  So companies are faced with a balance between the overhead of supporting external needs (aka collaboration) and the innovation those users bring into the effort.

Until recently, this balance usually tipped towards opening a project but under-investing in the community to keep the collaboration costs low.  The change I saw at OSCON is that companies now understand that making open projects successful brings communities closer to their products and services.

That’s a huge boon to the overall technology community.

Being able to leverage and extend tools that have been proven by these internal teams strengthens and accelerates everyone. These communities act as free laboratories that breed new platforms and build deep relationships with critical influencers.  The upstream savvy companies see returns from both innovation around their tools and more content that’s well matched to their platforms.

Oh, and companies that fail to upstream will find it increasingly hard to attract critical mind share.  Thinking through the alternatives gives us a Windows into how open source impacts past incumbents.

That leads to a future post about XaaS dog fooding and “pure-play” aaS projects like OpenStack and CloudFoundry.

Post Script about Upstreaming:

Successful open source projects create a community around their code base in which there are many people using and, ideally, contributing back to the project.  Since each individual has different needs, it’s expected that they will make personal modifications (called “forks”) of the code.   This forking is perfectly normal and usually a healthy part of growing a community.

The problem with forks is that the code diverges between the original (called “trunk” or “master”) source code and the user’s copy.  This divergence can be very expensive to maintain and correct in active projects because the forked code gets stale or incompatible with the other users’ versions.  To prevent this problem, savvy users will make sure that any changes they make get back into the trunk version.   Submitting code from your local (aka downstream) fork back to trunk is called upstreaming.

There’s a delicate balance between upstreaming and forking.  Being too aggressive with upstreaming means dealing with every change in the community and helping others adopt/accept your changes, which can result in a lot of churn.  Ignoring upstream means that you will ultimately miss out on community advancements in trunk or face a very expensive job reintegrating your code into trunk.
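The divergence-and-resync cycle above can be illustrated with a toy model. This is purely illustrative (commits are just string labels, and this is not a git tool); the commit names are invented:

```python
# Toy model of fork divergence and upstreaming: commits are plain labels.

def divergence(trunk, fork):
    """Commits the fork carries that trunk has never seen."""
    return [c for c in fork if c not in trunk]

trunk = ["c1", "c2"]           # the community's trunk/master branch
fork = trunk + ["local-fix"]   # a downstream copy with a private change
assert divergence(trunk, fork) == ["local-fix"]

# Upstreaming: the local change is accepted back into trunk...
trunk = trunk + ["local-fix"]
# ...so when trunk later advances, the fork can simply resync
# instead of facing an expensive reintegration job.
trunk = trunk + ["c3"]
fork = list(trunk)
assert divergence(trunk, fork) == []
print("fork is no longer diverged")
```

The longer the fork skips upstreaming, the longer its private list of commits grows, and the more painful that final resync becomes.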

Hollis Tibbetts (Ulitzer)ARM Server to Transform Big Data to Internet of Things (#IoT)

According to Chris Piedmonte, CEO of Suvola Corporation - a software and services company focused on creating preconfigured and scalable Microserver appliances for deploying large-scale enterprise applications, "the Microserver market is poised to grow by leaps and bounds - because companies can leverage this kind of technology to deploy systems that offer 400% better cost-performance at half the total cost of ownership. These organizations will also benefit from the superior reliability, reduced space and power requirements, and lower cost of entry provided by Microserver platforms".


Kevin Houston7 Lessons Learned From Cisco UCS

Here’s a summary of the lessons learned with Cisco UCS from the Cisco LIVE 2014 session titled, “BRKCOM-3010 – UCS: 4+ Years of Lessons Learned by the TAC”.

#1 – Read the Release Notes

It’s a good practice to read the release notes on any updates with UCS, specifically the Mixed Cisco UCS Release Support Matrix.  Also, if you are going to run mixed releases, make sure to check the “Minimum B/C Bundle…Features” section to ensure you have the right versions for any new features you are adding; otherwise you may get error messages.


#2 – Plan UCS Firmware Upgrades like an Elective Surgery

Before you begin any firmware upgrades, take the time to prepare.  Consider doing a proactive TAC update – let them know you are doing a firmware update so they can point out any reminders.  As mentioned above, consult the release notes.  Also, backup your system and check the compatibility matrices.  If you have any critical or major faults, contact the TAC and get the issues addressed before moving forward with any updates.  There are video guides on how to do upgrades, so consider reviewing them before upgrading.  Finally, check Cisco’s online community and support forums to see how other people are doing with upgrade paths.

According to Cisco, the steps most often overlooked in firmware upgrades are: not updating the OS drivers to meet the compatibility matrix, forgetting to back up the system prior to upgrade, and not upgrading the blade BIOS & Board Controller.  It’s important to follow these recommended planning steps, because if you run into issues down the road and Cisco finds that a driver or firmware is outside the support matrix, they won’t be able to help you move forward until you are in compliance.  Cisco’s recommendation is to use the UCS HW and SW Interoperability Matrix as a reference on what is supported.


#3 – Use Maintenance Windows for UCS Upgrades

Although you could feasibly do upgrades during the day, it’s not worth the risk.  Cisco TAC advises that all upgrades be done in a maintenance window – especially when doing changes to Fabric Interconnects.  Doing updates to one blade is fine, but since everything goes through the Fabric Interconnects, wait until you can get a maintenance window.  Better to be safe than sorry.

#4 – Backup UCSM

Although you have two Fabric Interconnects and redundancy, you still need to back up UCSM.  You have four different options: full state, system configuration, logical configuration and all configuration.  It’s recommended to do a full state backup (encrypted, and intended for Disaster Recovery).  The System Configuration option is XML based (not encrypted) and can be exported into other Fabric Interconnects as needed.  Logical Configuration is similar to System Configuration but contains details on Service Profiles, VLANs, VSANs, pools & policies.

#5 – Use Fiber Channel Port Channels with Fiber Storage

Individual Fiber Channel uplinks can have high latency issues.  Since HBAs are assigned FCIDs round-robin as they log in, there is no way to distribute the load based on actual traffic.  This becomes a problem when some HBAs access storage heavily, or if you lose a link; to resolve it, you have to manually rebalance the HBAs.  With Fiber Channel Port Channels, all individual links are seen as one logical link, allowing heavy workloads to be evenly distributed and preventing the loss of one link from impacting performance.
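A simplified numeric sketch of that imbalance, assuming two uplinks and four HBAs with very different (purely hypothetical) traffic volumes pinned round-robin in login order:

```python
# Hypothetical traffic per HBA in GB; values are purely illustrative.
flows = {"hba1": 90, "hba2": 10, "hba3": 80, "hba4": 20}

# Round-robin pinning: each HBA is statically assigned to an uplink in
# login order, regardless of how much traffic it actually generates.
links = [0, 0]
for i, gb in enumerate(flows.values()):
    links[i % 2] += gb
print("pinned uplinks:", links)   # heavily skewed across the two links

# Port channel: one logical link, so in the ideal case traffic spreads
# evenly across all member links.
per_member = sum(flows.values()) / len(links)
print("port-channel members:", [per_member] * len(links))
```

With static pinning one uplink ends up carrying most of the traffic while the other sits idle; aggregating the links into a single logical port channel lets the heavy flows share all members.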

#5 – Ensure Your A-Side and B-Side Fiber Channel Switches Remain Separated

Many people want to put an ISL between Fiber Channel Switches; however, the zoning propagates to both sides, and if a mistake is made on one side, it’ll take out the other.  Also, don’t connect your Fabric Interconnects to two separate Fiber Channel Switches.  Keep FI #1 attached to FC Switch #1 and FI #2 attached to FC Switch #2.


#6 – Don’t Use 3rd Party Transceivers

Pay the premium for Cisco transceivers and avoid unnecessary issues or faults.

#7 – Degraded DIMM Faults May Not Be Accurate

Cisco TAC admitted that Cisco had conservative thresholds for ECC errors on UCS, which caused more alarms than necessary.  These false alarms were fixed in firmware versions 2.2(1b) and 2.1(3c).  If you are experiencing these issues and are outside your maintenance window, you can safely ignore the ‘degraded DIMM’ faults until you upgrade or RMA the degraded DIMM.  In 2.2(1b), turn on DIMM blacklisting to mark DIMMs with uncorrectable errors as bad.




Barton GeorgePresenting Cloud at Harvard

In June I got to attend and present at the Harvard University IT Summit.  The one-day summit, which brought together the IT departments from the 12 colleges that make up the University, consisted of talks, panels and breakout sessions.

The day kicked off with a keynote from Harvard Business School professor Clayton Christensen of The Innovator’s Dilemma and “disruptive innovation” fame.  Christensen talked about disruption in business as well as disruption in Higher Ed and its threat to institutions like Harvard.

After the keynote there was a CIO panel featuring the CIOs of the various colleges where they discussed their strategic plans.   When the panel ended the concurrent sessions began.

My talk (see deck above) was near the end of the day and before the final keynote.  I took the attendees through the forces affecting IT in higher education and the value of a cloud brokerage model.   In the last part of my presentation I went over three case studies that involved Dell and the setting up of OpenStack-based clouds in higher education.

All in all, a great event, and I hope to be going back again next year.

The exhibit hall at the Harvard IT summit

Extra-credit reading

Rob HirschfeldShare the love & vote for OpenStack Paris Summit Sessions (closes Wed 8/6)


This is a friendly PSA that OpenStack Paris Summit session community voting ends on Wednesday 8/6.  There are HUNDREDS (I heard >1k) of submissions, so please set aside some time to review a handful.

MY PLEA TO YOU > There is a tendency for companies to “vote-up” sessions from their own employees.  I understand the need for the practice BUT encourage you to make time to review other sessions too.  Affiliation voting is fine; robot voting is not.

If you are interested in topics that I discuss on this blog, here’s a list of sessions I’m involved in:



Rob Hirschfeld4 item OSCON report: no buzz winner, OpenStack is DownStack?, Free vs Open & the upstream imperative

Now that my PDX TriMet pass has expired, it’s time to reflect on OSCON 2014.   Unfortunately, I did not ride my unicorn home on a rainbow; this year’s event seemed to be about raising red flags.

My four key observations:

  1. No superstar. Past OSCONs had at least one buzzy community superstar: 2013 was Docker and 2011 was OpenStack.  This was not just my hallway-track perception; I asked around about this specifically.  There was no buzz winner in 2014.
  2. People were down on OpenStack (“DownStack”). Yes, we did have a dedicated “Open Cloud Day” event, but there was something missing.  OpenStack did not sponsor and there were no major parties or releases (compared to previous years) and little OpenStack buzz.  Many people I talked to were worried about the direction of the community, fragmentation of the project and operational readiness.  I would be more concerned about “DownStack” except that no open infrastructure project was a superstar either (e.g.: Mesos, Kubernetes and CoreOS).  Perhaps OSCON is simply not a good venue for open infrastructure projects compared to GlueCon or Velocity?  Considering the rapid rise of container-friendly OpenStack alternatives, I think the answer may be that the battle lines for open infrastructure are being redrawn.
  3. Free vs. Open. Perhaps my perspective is more nuanced now (many in open source communities don’t distinguish between Free and Open source) but there’s a real tension between Free (do what you want) and Open (shared but governed) source.  Now that open source is a commercial darling, there is a lot of grumbling in the Free community about corporate influence and heavy handedness.   I suspect this will get louder as companies try to find ways to maintain control of their projects.
  4. Corporate upstreaming becomes Imperative. There’s an accelerating trend of companies that write lots of code to support their primary business (Google is a prime example) upstreaming to ensure that code gets used outside their company.   They open their code and start efforts to ensure its adoption.  This requires a dedicated post to really explain.

There’s a clear theme here: Open source is going mainstream corporate.

We’ve been building amazing software in the open that creates real value for companies.  Much of that value has been created organically by well-intentioned individuals; unfortunately, that model will not scale with the arrival of corporate interests.

Open source is thriving, not dying: these companies value the transparency, collaboration and innovation of open development.  Instead, open source is transforming to fit within corporate investment and governance needs.  It’s our job to help with that metamorphosis.

Matt DomschOttawa Linux Symposium needs your help

If you have ever attended the Ottawa Linux Symposium (OLS), read a paper on a technology first publicly suggested at OLS, or use Linux today, please consider donating to help the conference and Andrew Hutton, the conference’s principal organizer since 1999.

I first attended OLS in the summer of 2003. I had heard of this mythical conference in Canada each summer, a long way from Austin yet still considered domestic rather than international for the purposes of business travel authorization, so getting approval to attend wasn’t so hard. I met Val on the walk from Les Suites to the conference center on the first morning, James Bottomley during a storage subsystem breakout the first afternoon, Jon Masters while still in his manic coffee phase, and countless others that first year. Willie organized the bicycle-chain keysigning that helped people put faces to names we only knew via LKML posts. I remember meeting Andrew in the ever-present hallway track, and somehow wound up on the program committee for the following year and the next several.

I went on to submit papers in 2004 (DKMS), 2006 (Firmware Tools), 2008 (MirrorManager). Getting a paper accepted meant great exposure for your projects (these three are still in use today). It also meant an invitation to my first exposure to the party-within-the-party – the excellent speaker events that Andrew organized as a thank-you to the speakers. Scotch-tastings with a haggis celebrated by Stephen Tweedie. A cruise on the Ottawa River. An evening in a cold war fallout shelter reserved for Parliament officials with the most excellent Scotch that only Mark Shuttleworth could bring. These were always a special treat which I always looked forward to.

Andrew, and all the good people who helped organize OLS each year, put on quite a show, being intentional about building the community – not by numbers (though for quite a while, attendance grew and grew) – but providing space to build deep personal connections that are so critical to the open source development model. It’s much harder to be angry about someone rejecting your patches when you’ve met them face to face, and rather than think it’s out of spite, understand the context behind their decisions, and how you can better work within that context. I first met many of the Linux developers face-to-face at OLS that became my colleagues for the last 15 years.

I haven’t been able to attend for the last few years, but always enjoyed the conference, the hallway talks, the speaker parties, and the intentional community-building that OLS represents.

Several economic changes conspired to put OLS into the financial bind it is today. You can read Andrew’s take about it on the Indiegogo site. I think the problems started before the temporary move to Montreal. In OLS’s growth years, the Kernel Summit was co-located, and preceded OLS. After several years with this arrangement, the Kernel Summit members decided that OLS was getting too big, that the week got really really long (2 days of KS plus 4 days of OLS), and that everyone had been to Ottawa enough times that it was time to move the meetings around. Cambridge, UK would be the next KS venue (and a fine venue it was). But in moving KS away, some of the gravitational attraction of so many kernel developers left OLS as well.

The second problem came in moving the Ottawa Linux Symposium to Montreal for a year. This was necessary, as the conference facility in Ottawa was being remodeled (really, rebuilt from the ground up), which prevented it from being held there. This move took even more of the wind out of the sails. I wasn’t able to attend the Montreal symposium, nor since, but as I understand it, attendance has been on the decline ever since. Andrew’s perseverance has kept the conference alive, albeit smaller, at a staggering personal cost.

Whether or not the conference happens in 2015 remains to be seen. Regardless, I’ve made a donation to support the debt relief, in gratitude for the connections that OLS forged for me in the Linux community. If OLS has had an impact in your career, your friendships, please make a donation yourself to help both Andrew, and the conference.

Visit the OLS Indiegogo site to show your support.

Mark CathcartDishwasher Trouble

This post is for all those people who came to my house for dinner over the last 8 years, especially the epic Of By For movie premiere. Many times friends have followed my lead and we’ve cleared up and washed and dried dishes by hand.

I’m OK with that; I don’t make much mess, so rather than waste water and electricity, I do them by hand. At least at the Of By For dinner, which was a mammoth day-and-a-half prep and cooking extravaganza, a number of the guests (Kelley, Tammy, Bree, Bekah, Maria and others) tidied up and loaded up the dishwasher. We switched it on and… grrrrrrrr. Nothing.

I’ve never used it since I bought the house; it may never have been used at all. So, months later, I can wholeheartedly recommend Mr Appliance for the repair. For $89 the repair guy came to the house this afternoon, switched on the washer, listened, reached under the sink, switched on the water, and the dishwasher worked great. D’oh…

Oh yeah, Of By For? I was a kickstarter backer.

Kevin HoustonHow Many NICs Do You Use for Virtualization?