dellites.me

Matt Domsch: [REPOST] Who am I?

I’ve started blogging again on the Dell TechCenter site, Enterprise Mobility Management section, along with the rest of my team.

Here’s the intro to my first post, “Who am I?”:

The existential question, asked by everyone and everything throughout their lifetimes – who am I? High school seniors choosing a college, college seniors considering grad school or entering the job market, adults in the midst of their mid-life crisis—the question comes far easier than the answer.

In the world of technology, who you are depends on the technology with which you are interacting. On Facebook, you are your quirky personal self, with pictures of your family and vacations you take. On LinkedIn, you are your professional self, sharing articles and achievements that are aligned with your career.

What about on the myriad devices you carry around? On the smartphone in my pocket, I have several personas—personal, business, gamer (my kids borrow my phone), constantly context-switching between them. In the not-too-distant past, people would carry two phones—one for personal use and one for work, keeping the personas separate via physical separation—two personas, two devices.

Read more…

Barton George: Bernard Golden: PaaS, Standards and Open Source

Last but not least in my series of four videos from the Cloud Standards Customer Council is an interview with Bernard Golden.  Bernard, who is the VP of strategy at ActiveState, gave an industry-perspective talk entitled "What Should PaaS Standards Look Like."  Bernard then sat on the PaaS panel that followed.

I sat down with Bernard and he gave a quick overview of his talk as well as his thoughts on OpenStack and its need for focus.  Take a listen:

Extra-credit reading

Pau for now…


Barton George: Azure architect talks about Kubernetes and the future of PaaS

Here is the third of four interviews that I conducted last week at the Cloud Standards Customer Council.  The theme of the conference was “preparing for the post-IaaS phase of cloud adoption” and there was quite a bit of talk around the role that PaaS would play in that future.

The last session of the morning, before we broke for lunch, was a panel centered around Current and Future PaaS Trends.   After the panel ended I sat down with panelist John Gossman, architect on Microsoft Azure.  John, an app developer by origin, focuses on the developer’s experience on the cloud.

Below, John talks about working with Google on Kubernetes and getting it to work on Azure, as well as the potential future of PaaS as a runtime that sits on top of IaaS.

Stay tuned for my next post, when I will conclude my mini-series from the Cloud Standards Customer Council meeting with an interview with Bernard Golden.

Extra-credit reading

Pau for now…


Ryan M. Garcia Social Media Law: My Awesome Announcement

I hate tooting my own horn but this is one of the proudest moments in my still short social media law career.  Please forgive the somewhat staged presentation but those who know me know that if I’m going to tell a story I need to make it interesting.

I was at the University of Texas Co-op’s law school location last week browsing the Nutshell books.  (Go with me, people.)  For those of you not in the legal profession, congrats on that by the way, know that the Nutshell series is put out by West Academic (one of the biggest names, if not the biggest name, in the legal publishing world) and is a fantastic resource for an overview of legal issues in a particular topic.  They aren’t casebooks–those larger books of (often edited) cases used to examine judicial rulings in particular areas.  Nutshells get right to the point and provide essential information on the overall legal topic.  I used more than one when I was in law school and as a practicing attorney.

But I noticed something was missing from the Nutshell section.  Can you spot it?

Can you spot what’s missing?

That’s right, there’s no Social Media Law in a Nutshell.

Let’s fix that, shall we?

I’m proud to announce that I will be writing Social Media Law in a Nutshell for West Academic.  My co-author, Thaddeus Hoffmeister, is a professor of law at the University of Dayton School of Law and has previously published a book on social media in the courtroom.  His knowledge of social media litigation, evidence uses, and applicability in criminal cases will combine with my information on the marketing, content, employment and other social media uses to make this a comprehensive review of social media across all legal channels.

Doing this as a Nutshell book feels perfect right now.  There isn’t a wealth of case law on social media issues, but there are certainly cases out there.  In some areas the most fascinating legal issues are taking place outside of a courtroom so a Nutshell allows us to cover those topics in ways a casebook couldn’t.  Plus, when the movie rights get picked up we all agree that Hugh Jackman can play me.  He’s just a more talented and better looking version of me who can also sing and dance and has a better accent.  The resemblance is uncanny.

I’m not sure when the book will be released but it certainly won’t be until 2015 at the earliest.  Rest assured I’ll let you all know as the process unfolds.

Yesterday I published the 100th blog post here on SoMeLaw Thoughts.  When I look back at how much has changed in social media since I started writing about it, not just my own professional involvement, it’s staggering.  I feel incredibly lucky to take this journey and contribute to the field as well as participate in a line of books that I personally value.  To join the ranks of the Nutshell books blows my mind.

Thanks to all of my readers and friends on social media who have pushed/pulled/heckled me along the way.  An even bigger thanks to my family for putting up with my little side projects.

Now, if you’ll excuse me, I’ve got some writing to do.


William Leara: A Tour of the Intel BITS

Burt and Josh Triplett have created a nifty tool for validating that a BIOS has successfully configured Intel resources such as:

  • MSRs
  • C-states
  • P-states
  • power management reference code
  • select ACPI tables
  • SMI frequency and latency
  • microcode

From the website:

The Intel BIOS Implementation Test Suite (BITS) provides a bootable pre-OS environment for testing BIOSes and in particular their initialization of Intel processors, hardware, and technologies. BITS can verify your BIOS against many Intel recommendations. In addition, BITS includes Intel's official reference code as provided to BIOS, which you can use to override your BIOS's hardware initialization with a known-good configuration, and then boot an OS.

The application runs equally well on either UEFI or Legacy BIOS systems.  I successfully ran the utility on both Nehalem (Legacy-based) and Sandy Bridge (UEFI-based) systems.

Setting up the tool and launching it took literally four minutes.  The download contains an .ISO file, and installing the tool is simply a matter of writing that .ISO to either a CD or a USB flash drive.  The INSTALL.TXT file gives the Linux dd command to accomplish this.  For Windows, I used the excellent Rufus tool on a USB flash drive with the following settings:

[Screenshot: Rufus settings used to write the BITS .ISO to a USB flash drive]

I was then able to legacy boot to the USB flash drive, which brought me directly into the BITS tool.  Here’s a sample run (1:58):

BITS version 1090 on a Sandy Bridge system

The BITS tool can be run via a simple text menu system, or it can be controlled via a Python scripting interface.  There is a configure menu that allows the user to modify the assumptions in the tests.  The verbosity of the tests can be adjusted.  There is a facility for saving the test logs so they can be analyzed off-line.  The source code is also available for users to download, modify, and compile themselves.
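
The scripting interface is what makes BITS more than a canned test suite.  As a rough illustration of the style, here is a minimal sketch that reads an MSR on every logical CPU from within the BITS Python environment.  The module and function names (bits.cpus, bits.rdmsr) are from memory and may differ between BITS versions, so treat this as illustrative rather than copy-paste ready:

    # Runs inside the BITS GRUB2 environment, not a regular OS Python.
    # Assumed API: bits.cpus() yields the APIC IDs of all logical CPUs;
    # bits.rdmsr(apicid, msr) returns the MSR value, or None if the read faults.
    import bits

    IA32_APIC_BASE = 0x1B  # example MSR to inspect on each CPU

    for apicid in bits.cpus():
        value = bits.rdmsr(apicid, IA32_APIC_BASE)
        if value is None:
            print("CPU {:#04x}: MSR read faulted".format(apicid))
        else:
            print("CPU {:#04x}: IA32_APIC_BASE = {:#018x}".format(apicid, value))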

The download is free.  The website has a good screenshot tour of the capabilities; more detailed documentation is found in the download.  I think the tool will be a great help to BIOS developers.  At a minimum it can be the start of a conversation with your Intel FAE over what constitutes the “correct” MSR settings, microcode, reference code, etc.

Rob Hirschfeld: Cloud Culture: Becoming L33T – Five ways to “go digital native” [Collaborative Series 7/8]

Subtitle: Five keys to earning Digital Natives’ trust

This post is #7 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

WARNING: These are not universal rules! These are two cultures. What gets high scores for Digital Natives is likely to get you sacked with Digital Immigrants.

How do Digital Natives do business?

You've gotta deal with itYou don’t sell! You collaborate with them to help solve their problems. They’ll discredit everything say if you “go all marketing on them” and try to “sell them.”

Here are five ways that you can build a two-way collaborative relationship instead of one-way selling. These tips aren’t speculation: Brad has proven these ideas work in real-world business situations.

1) Share, don’t tell.

Remember the cultural response in Rob’s presentation discussed in the introduction to this paper? The shift took place because Rob wanted to share his expertise instead of selling the awesomeness of his employer. This is what changed the dynamic.

In a selling situation, the sales pitch doesn’t address our client’s needs. It addresses what we want to tell them and what we think they need. It is a one-way conversation. And if someone is given a choice between saying “yes” or “no” in a sales meeting, a client can always choose to say “no.”

Sharing draws our customers in so we can hear their problems and solve them. We can also get a barometer on what they know versus what they need. When Rob is presenting to a customer, he’s qualifying the customer too. Solutions are not one size fits all and Digital Natives respect you more for admitting this to them.

Digital Native business is about going for a long-term solution-driven approach instead of just positioning a product. If you’ve collaborated with customers and they agree you’ve got a solution for them then it’s much easier to close the sale. And over the long term, it’s a more lucrative way to do business.

2) Eliminate bottlenecks.

Ten years ago, IT departments were the bottleneck to getting products into the market. If customers resisted, it could take years to get them to like something new. Today, Apple introduces new products every six months with a massive adoption rate because Digital Natives don’t wait for permission from an authority.

The IT buyer has made that sales cycle much more dynamic because our new buyers are Digital Natives. Where Digital Immigrants stayed entrenched in a process or technology, Digital Natives are more willing to try something unproven. Amazon’s EC2 public cloud presented a huge challenge to the authority of IT departments because developers were simply bypassing internal controls. Digital Natives have been trained to look for out-of-the-box solutions to problems.

Time-to-market has become the critical measure for success.

We now have IT end-user buyers who adopt and move faster through the decision process than ever before! We interfere with their decision process if we keep treating new buyers as if they can’t keep up and we have to educate them.

Today’s Digital Workers are smart self-starters who more than understand technology; they live it. Their intuitive nature toward technology and the capacity to use it without much effort has become a cultural skill set. They can look up, absorb, and comprehend products on their own. They did their homework before we walked in the door.

Digital Natives are impatient. They want to skip over what they know and get to real purpose and collaboration. You add bottlenecks when you force them back into a traditional decision process that avoids risk; instead, they are looking to business partners to help them iterate and accelerate.

How did this apply to the Crowbar project?

Crowbar addresses a generation’s impatience to be up and running in record time. But there is more to it than that: we engage with customers differently too. Our open source collaboration and design flexibility mean that we can dialog with customers and partners to figure out the real wants and needs in record time.

3) Let go of linear.

Digital Natives do not want to be walked through detailed linear presentations. They do want the information, just without the hand-holding. The best strategy is to prepare to be a well-trained digital commando—plan a direction, be confident, be ready to respond, and be willing to admit knowledge gaps. It’s a strategy without a strategy.

Ask questions at the beginning of a meeting—this becomes a knowledge base “smell test.” Listening to what our clients know and don’t know gets us to the heart and purpose of why we are there. Take notes. Stay open to curve balls, tough questions, and—dare we say it—the client telling us we are off base. You should not be surprised at how much they know.

For open source projects at Dell (Rob’s employer), customers have often downloaded and installed the product before they have talked to the sales team. Rob has had to stop being surprised when they are better informed about our offerings than our well-trained internal teams. Digital Natives love collecting information and getting started independently. This completely violates the normal linear sales process; instead, customers enter more engaged and ready if you can be flexible enough to meet them where they already are.

4) Be attentively interactive.

No one likes to sit in one meeting after another. Why are meetings boring? Meetings should be engaging and collaborative; unfortunately, most meetings are simply one-way presentations or status updates. When Digital Natives interrupt a presentation, it may mean they are not getting what they want but it also means they are paying attention.

Aren’t instant messaging, texting, and tweeting attention-stealing distractions?

Don’t confuse IMing, texting, emailing, and tweeting as lack of attention or engagement.

Digital Natives use these “back channels” to speed up knowledge sharing while eliminating the face-to-face meeting inertia of centralized communication.

Of course, sometimes we do check out and stop paying attention.

Time and attention are valuable commodities!

With all the distractions and multi-tasking for speed and connectivity, giving someone undivided attention is about respect, and paying attention is not passive! When we ask questions, it shows that we’re engaged and paying attention. When we compile all the answers from those questions, our intention leads us to solutions. Solving our client’s problems is about getting to the heart of the matter and becomes the driving force behind every action and solution.

Don’t be afraid to stray from the agenda—our attention is the agenda.

5) Stay open to happy accidents.

In Brad’s book, Liquid Leadership, the chapter titled “Have Laptop. Will Travel” points out how Digital Natives have been trained in virtualized work habits because they are more effective.

Our customers are looking for innovative solutions to their problems and may find them in places that we do not expect. It is our job to stay awake and open to solution serendipity. Let’s take this statement out of our vocabulary: “That’s not how we do it.” Let’s try a new approach: “That isn’t traditionally how we would do it, but let us see if it could improve things.”

McDonald’s uses numbers for their combo meals to make sure ordering is predictable and takes no more than 30 seconds. It sounds simple, but changes like that come from listening to customers’ habits. We need to stop judging and start adapting. Imagine a company that adapts to the needs of its customers.

Sales guru Jeffrey Gitomer pays $100 in cash to any one of his employees who makes a mistake. Each mistake is analyzed to figure out whether it is worth applying or should be discarded. He doesn’t pay $100 if they make the same mistake twice. Mistakes are where we can discover breakthrough ideas, products, and methods.

Making these kinds of leaps requires that we first let go of rigid rules and opinions and make it OK to make a few mistakes … as long as we look at them through a lens of possibility. Digital Natives have spent 10,000 hours at play, learning to make mistakes, take risks, and reach mastery.


Ryan M. Garcia Social Media Law: 13 Quick Thoughts About The iPhone 6 Plus

You wouldn’t like the iPhone 6 Plus when it gets angry.

Because social media and mobile technology are so well connected and because I didn’t want to post a long thing on Facebook, here are some quick thoughts from my own use of my new iPhone 6 Plus.  Most of these are answers to questions I’ve been asked.  If you have some questions, fire away.

  1. Yes, it’s big.  When you put it next to an older model iPhone it seems gigantic. Shockingly, once you start using it away from your old phone it does seem a bit bigger but not much.  I believe a similar technique will be used to shrink Paul Rudd in the Ant-Man movie.
  2. Yes, it fits in my pocket.  Both my jeans pocket (but I don’t wear skinny jeans because a-I’m not skinny and b-ew) and my shirt pocket.  When it sits in my shirt pocket the top bit including the camera does stick out so it might concern people that I’m filming them as I walk by.  Which I’m not.  Probably.
  3. I have no idea if it fits in a suit jacket pocket.  What’s a suit jacket?  I live in Austin and I’m in-house counsel.  That means I’m forbidden by two sets of laws to wear a suit.  Same with this sports coat people mention.
  4. Yes, I can use it one handed.  And that’s without doing the double tap to bring the top stuff down to the bottom, although that helps too.  I don’t know if it’s because I have large hands (I never thought I did) or if it’s because I grew up playing arcade games in the 80s (which required you to dislocate three fingers to play Defender for more than 3 levels–and don’t get me started on my finger speed thanks to Track & Field).  Either way, I can use it just fine with one hand.  A bit slower than the iPhone 5 but that could be the size or just getting used to it.
  5. Setup was super easy.  Maybe it’s because I’m used to switching Apple phones, but the old back-up with encryption (to keep your passwords) and restore from backup worked flawlessly.  I did have a slight hiccup getting the phone to activate (you have to call an 866 number for AT&T) and then there was a weird iMessage bug (solved by turning iMessage off and back on, IT Crowd for the win!).
  6. It’s actually faster to use with two hands.  I didn’t think about this but maybe it does show my hands are that big.  I was never able to use my previous iPhones with two hands.  My hands just got in the way–at least to make it any faster than using it one handed.  But now there is plenty of room to navigate so I can move faster with two hands typing.  That’s pretty neat.
  7. The predictive keyboard is very cool.  Having a few options available is nice and it seems to make that damn autocorrect less intrusive.  I hope this doesn’t mean Damn You Autocorrect is going away because those are the best.  My favorite feature–if someone sends you a message with two options (e.g., “This or that?”) then without typing a character the predictive keyboard will give you the choice of “This” or “That.”  Nice touch.
  8. Jitterbug mode sucks, will hopefully improve over time.  You know Jitterbug, the smart phone for “aging Americans?”   Using an iPhone app that hasn’t been redesigned for the 6 Plus’ screen feels a bit like using an app on Jitterbug.  Suddenly everything is blown up to silly levels as iOS scales the apps to fill the space rather than give a big black border like the first iPads did.  My Good For Enterprise app still shows 3.5 emails on the Inbox view only now each one is massive.  Compare that to the native Mail app that shows 6.5 in highly legible type.  I know it’s just a matter of time before the main apps I use update (Good, Facebook, Twitter, etc.) but that can’t come fast enough.
  9. The battery rocks.  This may be partly due to leaving autobrightness on (which works much better than it did with my 5–I constantly had that phone on its brightest setting for most of the day), but I noticed the strong battery life yesterday even when I cranked the brightness manually (before realizing I didn’t need to).  Right now my battery is sitting at 75%.  I’ve had typical usage of it today, perhaps a bit less than other days.  But on most days my iPhone 5 would be at 20%-30% by the end of lunch.  75% is amazing.
  10. I still haven’t played with the camera.  I look forward to having fun with slow motion and burst photos and all that, but I’m not a good photographer and I take pictures when needed.  Like if there’s a funny sign.
  11. Native HD screen rocks!  The biggest draw for the Plus over the basic 6 was, for me, the native HD screen.  This screen has all the pixels of a 1080p video stream.  Every other phone has to squeeze it down a bit.  An iPad, with its extra pixels, is just stretching the image out.  This is fantastic for someone like me who only uses my iPad to watch movies and read comics.  Now I don’t need that for movies (and I’m hoping Comixology gives us full page view on the 6 Plus soon).  I watched some Netflix on it last night and that was awesome.  Holding a 5.5″ screen a foot away from my eyes seems larger than my living room TV which is 57″ but ten feet away.
  12. I’m hopeful this is my iPad replacement.  I don’t like travelling with two devices or having some games/apps on my iPad while my essential stuff is on my iPhone.  My goal is for this device to replace my old phone (check) and my iPad (let’s see, but so far so good).
  13. There is no item 13.  But congrats on making it to the end.

Mark Cathcart: Gaining a better city view

McKinney, Texas is a great city; it contains all the best things about Texas towns and architecture.

Now I’m delighted to say they’ve adopted and become a reference for a number of our products. You can read the full solutions brief here.

The city deployed Foglight Application Performance Monitoring (APM) and Toad Data Modeler from Dell Software to increase application visibility, speed troubleshooting and improve integration, giving:

  • Better visibility into the city’s critical web and legacy applications
  • More-modern applications and services for employees and residents
  • Faster diagnosis and problem resolution
  • Proactive troubleshooting
  • Stronger integration of disparate applications
  • Ability to get more out of existing infrastructure

Yes, the picture is of McKinney Falls State Park, it’s not a city property.


Barton George: The Future of PaaS, its “value proposition” and Docker

At last week’s Cloud Standards Customer Council held in Austin, Texas, the first panel of the day dealt with “Current and Future PaaS Trends.”   The panel debated whether there should or could be a PaaS standard as well as what its future might look like.

One of the panelists was Diane Mueller, community manager of OpenShift Origin.  I grabbed some time with Diane after the panel and got her to share her thoughts on the viability of a PaaS standard and how she saw the technology evolving.

Stay tuned for two more posts from last week’s Cloud Standards Customer Council meeting and more PaaS prognostication.

Extra-credit reading

Pau for now…


Rob Hirschfeld: Cloud Culture: Level up – You win the game by failing successfully [Collaborative Series 6/8]

Translation: Learn by playing, fail fast, and embrace risk.

This post is #6 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Digital Natives have been trained to learn the rules of the game by just leaping in and trying. They seek out mentors, learn the politics at each level, and fail as many times as possible in order to learn how NOT to do something. Think about it this way: You gain more experience when you try and fail quickly than when you carefully plan every step of your journey. As long as you are willing to make adjustments to your plans, experience always trumps prediction.

Just like in life and business, games no longer come with an instruction manual.

In Wii Sports, users learn the basics in-game and figure out the subtleties of the game as they level up. Tom Bissell, in Extra Lives: Why Video Games Matter, explains that the in-game learning model is core to the evolution of video games. Game design involves interactive learning through the game experience; consequently, we’ve trained Digital Natives that success comes from overcoming failure.

Early failure is the expected process for mastery.

You don’t believe that games lead to better decision making in real life? In a January 2010 article, WIRED magazine reported that observations of the new generation of football players showed they had adapted tactics learned in Madden NFL to the field. It is not just the number of virtual downs played; these players have gained a strategic, field-level perspective on the game that was previously limited to coaches. Their experience playing video games has shattered the on-field hierarchy.

For your amusement, here is a College Humor video about L33T versus N00B culture, “L33Ts don’t date N00Bs”: youtu.be/JVfVqfIN8_c

Digital Natives embrace iterations and risk as a normal part of life.

Risk is also a trait we see in entrepreneurial startups. Changing the way we did things before requires you to push the boundaries, try something new, and consistently discard what doesn’t work. In Lean Startup Lessons Learned, Eric Ries built his entire business model around the try-learn-adjust process. He’s shown that iterations don’t just work, they consistently out-innovate the competition.

The entire reason Dell grew from a dorm to a multinational company is due to this type of fast-paced, customer-driven interactive learning. You are either creating something revolutionary or you will be quickly phased out of the Information Age. No one stays at the top just because he or she is cash rich anymore. Today’s Information Age company needs to be willing to reinvent itself consistently … and systematically.

Why do you think larger corporations that embrace entrepreneurship within their walls seem to survive through the worst of times and prosper like crazy during the good times?

Gamers have learned that risk with purpose will earn you rewards.


William Leara: USB 3.1 Developers Days, Singapore

original announcement:
http://www.usb.org/developers/events/USB31DevDaysSingapore

The USB 3.1 Specification adds a SuperSpeed USB 10Gbps speed mode that uses a more efficient data encoding and will deliver more than twice the effective data throughput performance of existing SuperSpeed USB over enhanced, fully backward compatible USB connectors and cables. The specification extends the existing SuperSpeed mechanical, electrical, protocol and hub definition while maintaining compatibility with existing USB 3.0 software stacks and device class protocols as well as with existing 5Gbps hubs and devices and USB 2.0 products.
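
That “more than twice” figure follows directly from the line encoding rather than marketing rounding.  A quick back-of-envelope check (8b/10b is the USB 3.0 encoding; 128b/132b is the new, more efficient one):

    # Effective payload bandwidth = line rate * encoding efficiency.
    # USB 3.0 SuperSpeed: 5 Gbps line rate, 8b/10b encoding (80% efficient).
    # USB 3.1 SuperSpeedPlus: 10 Gbps line rate, 128b/132b encoding (~97%).
    gen1 = 5e9 * 8 / 10        # ~4.0 Gbps of payload bandwidth
    gen2 = 10e9 * 128 / 132    # ~9.7 Gbps of payload bandwidth
    print("USB 3.0 effective: {:.2f} Gbps".format(gen1 / 1e9))
    print("USB 3.1 effective: {:.2f} Gbps".format(gen2 / 1e9))
    print("improvement: {:.2f}x".format(gen2 / gen1))  # ~2.42x, more than twice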

The USB Type-C Cable and Connector Specification defines a new USB connector solution that extends the existing set of cables and connectors to enable emerging platform designs where size, performance and user flexibility are increasingly more critical. The specification covers all of the mechanical and electrical requirements for the new connector and cables. Additionally, it covers the functional requirements that enable this new solution to be reversible both in plug orientation and cable direction, and to support functional extensions that designers are looking for in order to enable single-connector platform designs.

The USB Power Delivery Specification defines the use of a sideband communications method used between two connected USB products to discover, configure and manage power delivered across VBUS between USB products with control over power delivery direction, voltage (up to 20V) and current (up to 5A). The USB Power Delivery 2.0 update adds a new communications physical layer that is specific to the USB Type-C cable and connector solution. The specification also extends the definition of Structured Vendor Defined Messages (VDMs) to enable the functional extensions that are possible with the USB Type-C solution.
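
To put those power numbers in perspective: at the stated maximums, 20 V at 5 A works out to 100 W over a single cable.  A small sketch of the arithmetic; the intermediate voltage/current pairings below are commonly cited PD levels, listed for illustration rather than as an exhaustive spec table:

    # Power delivered over VBUS = negotiated voltage * negotiated current.
    levels = [(5, 3.0), (9, 3.0), (15, 3.0), (20, 5.0)]  # illustrative pairings
    for volts, amps in levels:
        print("{:>2d} V @ {:.1f} A -> {:5.1f} W".format(volts, amps, volts * amps))
    # The last line prints 100.0 W, the spec's 20 V / 5 A ceiling.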

What:  USB 3.1 Developers Days is an opportunity to review these specifications and engage with experts in a face-to-face setting
When:  The conference will be held November 19-20, 2014
Cost:  Members US $425.00; Non-members US $875.00
Registration will close on Monday, November 3 at 5:00PM US Pacific Time. All attendees MUST be pre-registered as on-site registration will not be available.

Agenda (subject to change):
Day 1:  USB 3.1 (featuring the new USB Type-C connector)

- Registration check-in
- Introduction
- USB 3.1 Architectural Overview
- USB 3.1 Physical and Link Layers
- USB Type-C Functional Requirements
- USB 3.1 Protocol Layer
- USB 3.1 Hub
- USB 3.1 Compliance

Day 2:  Track One
- USB Cables and Connectors (including USB Type-C)
     * Overview
     * USB Type-C Mechanical requirements and compliance
     * USB Type-C Electrical/EMC requirements and compliance
- USB 3.1 System Design
     * USB 3.1 design and interoperability goals, and design envelope (EQ capability, channel loss budget)
     * System simulation:  reference channels and reference equalizers
     * Key system performance metrics and design trade-offs
     * Design recommendations and trade-offs for package and PCB designs
     * Silicon design considerations, including equalizers and system margining
     * Re-timing repeater design requirements
     * Design to minimize EMI & RFI

Day 2:  Track Two
- USB Power Delivery 2.0
     * Introduction and Architectural Overview
     * Electrical/Physical Layer
     * Protocol Layer
     * Protocol Extensions (specific to USB Type-C)
     * Device and System Policy
     * Power Supply
     * Compliance

Exhibitor Opportunities
Applications and agreements for exhibitors are now being accepted. Don’t miss out on these opportunities to increase your company’s exposure at this event and demonstrate your company's industry leadership in advancing the USB landscape. 
Where:  Marina Bay Sands
              10 Bayfront Avenue
              Singapore 018956
www.marinabaysands.com
Tel.: +65 6688 8868

Hotel Accommodations
The group room block is at the Marina Bay Sands. To receive the group sleeping room rate of SGD 300 plus tax per night (includes complimentary guestroom internet), attendees should make their reservations by visiting https://resweb.passkey.com/go/USBDevDays by Monday, November 3, 2014. Reservations received after November 3rd are subject to availability and room type and will be offered at the group rate based on availability only. The special group rate is available from November 17th through November 20th. Reservations beyond these dates will be offered at the group rate based on availability.

A major credit card is needed to guarantee guestroom reservations. Any reservation cancellations or changes should be made 14 days prior to arrival to avoid cancellation penalties. If the room is cancelled less than 14 days prior to arrival, the hotel will charge 100% of the agreed room rate for the entire stay to the credit card on file.

Hotel check-in time is 3:00pm. Check-out time is 11:00am.  Early check-in and late check-out are subject to availability. 

Hotel:  Marina Bay Sands
Cut-Off Date:  Monday, November 3, 2014
Group Rate:  SGD 300 plus tax per night

Ryan M. Garcia Social Media Law: Eight Social Media Rules For Kids

“Mommy? I need more Candy Crush lives!”

Coming up with rules for kids on social media is hard.  First, coming up with any set of rules for kids is hard, as any parent can attest.  But when you’re talking about a complicated subject like social media it can be even trickier.  There are very real risks of kids not realizing what is or isn’t appropriate on social media, or not realizing who can see their content.  There are also scary-but-untrue stories about predators seeking kids on social media and other online boogeymen; even if we rationally don’t believe those stories, none of us wants to be the one parent whose child actually faced the monster.

Even though I work in the social media space I hadn’t given the topic of social media rules for kids much thought until a co-worker (hat tip to Gretchen) asked me about it this week.  My boys are too young for any social media platforms and still young enough that their friends aren’t pressuring them to join.  But I know that will change, and it will change faster than I want it to.  And while it may be a simple rule to say “No social media until you’re [AGE]!” I also know that social media is as much a part of young culture as it is adult culture.  Banning something isn’t as effective as teaching them the right way to do it.

But for young kids first experiencing social media it’s a huge topic to cover.  In some ways I compare it to driving a car–it’s a tool that everyone uses and it’s important to learn how to use it properly because bad things can happen if you mess around.  But in other ways this is a bad comparison–when a teenager learns to drive they’ve been sitting in a car as a passenger for many, many years.  Children first going online typically haven’t been a backseat passenger to their parents’ online activities so we have to teach them the rules of a road they’ve never been on.

This topic prompted me to post some initial rules for kids on social media which I invited comments on and then revised.  I share them here because it was a good conversation but let me make a few important call-outs.

  • As with any set of rules for kids, these are completely customizable for your family and your children.  I am not saying this is the right way to do it, this is just one way to start thinking about it.
  • The rules are written a bit strongly but that’s because social media is similar to a car that weighs several tons–use it correctly and you’re good.  One bad accident can have serious consequences.  I’m not trying to scare people, I’ve just worked long enough in the space to know better.  I imagine ambulance drivers and emergency room workers have similar conversations with their kids about driving motorcycles.
  • These are basic rules that I want to apply to all platforms but also to trigger a series of conversations about how to use social media.  That’s the basis of rule five. Nobody should think you can give these rules to a child and then they know what to do–this is the foundation for you to teach them about posting appropriate content, providing appropriate responses, and engaging with people they do or do not know in real life.  This is the start of the conversation, not the end or the totality.

That said, here are the Eight Rules.  If you have additions, please leave me a comment below.

  1. This is not your account, this is my account with your name on it.
  2. I will set the password and you will not change it. If the platform requires you to change it then you will come to me and I will change it for you.
  3. I will be monitoring your account. Don’t post or say anything that you don’t want me to see because I will see it. If you’d like something more private I’m happy to buy you a diary and a pen.
  4. When I say I will be monitoring your account I mean that I will be actively watching your account and so will many other people. All of these people, like me, have your best interest in mind when we stop you from doing unwise things.
  5. I understand you’ll be learning how to use social media and that the learning process is a journey so I will be patient and explain the things you should and shouldn’t do. You, in turn, need to understand that there are risks and concerns you can’t comprehend right now so while some of my advice may seem odd you will still need to follow it.
  6. If you ever have a question about posting something, ask me first. Social media is about conversations but it is also very different from the actual conversations you’ve had with family and friends. It takes time to learn but it’s better to ask first than regret later.
  7. I will warn you once before I remove your access to the account. Unless you do something really awful in which case you won’t get a warning. Trying to circumvent these rules (making another account, deleting accounts, etc.) is automatically awful.
  8. If you think these rules are strict just wait until we talk about driving when you’re 16.

Barton George: Preparing for the Post-IaaS world

Today I attended a day-long event put on here in Austin by the Cloud Standards Customer Council.  It was a packed agenda focused on the theme “preparing for the post-IaaS phase of cloud adoption.”

Craig Lowery, Sr Distinguished Engineer in Dell Software, chaired the event and gave the opening presentation.  I grabbed some time with Craig during the lunch break to get his thoughts on the event and have him hit the highlights of his presentation.

Take a listen.

Stay tuned next week for three more short interviews from the event around Docker, the future of PaaS and more.

Pau for now…


Rob Hirschfeld: Cloud Culture: Online Games, the real job training for Digital Natives [Collaborative Series 5/8]

Translation: Why do Digital Natives value collaboration over authority?

Kids Today

This post is #5 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Before we start, we already know that some of you are cynical about what we are suggesting—Video games? Are you serious? But we’re not talking about Ms. Pac-Man. We are talking about deeply complex games with rich storytelling and task-driven play, games that rely on multiple missions and worldwide player communities working together on a singular mission.

Leaders in the Cloud Generation don’t just know this environment; they excel in it.

The next generation of technology decision makers is made up of self-selected masters of the games. They enjoy the flow of learning and solving problems; however, they don’t expect to solve them alone or in a single way. Today’s games are not about getting blocks to fall into lines; they are complex and nuanced. Winning is not about reflexes and reaction times; winning is about being adaptive and resourceful.

In these environments, work can look like chaos, but digital workspaces and processes are not random; they leverage new-generation skills. In the book Different, Youngme Moon explains how innovations look crazy when they are first revealed. How is the work getting done? What is the goal here? These are called “results-only work environments,” and studies have shown they increase productivity significantly.

Digital Natives reject top-down hierarchy.

These college-educated self-starters are not rebels; they just understand that success is about process and dealing with complexity. They don’t need someone to spoon-feed them instructions.

Studies at MIT and The London School of Economics have revealed that when high-end results are needed, giving people self-direction, the ability to master complex tasks, and the ability to serve a larger mission outside of themselves will garner groundbreaking results.

Gaming does not create mind-addled Mountain Dew-addicted unhygienic drone workers. Digital Natives raised on video games are smart, computer savvy, educated, and, believe it or not, resourceful independent thinkers.

Thomas Edison said:

“I didn’t fail 3,000 times. I found 3,000 ways how not to create a light bulb.”

Being comfortable with making mistakes thousands of times ’til mastery sounds counter-intuitive until you realize that is how some of the greatest breakthroughs in science and physics were discovered.  Thomas Edison made 3,000 failed iterations in creating the light bulb.


Rob Hirschfeld: To improve flow, we must view OpenStack community as a Software Factory

This post was sparked by a conversation at OpenStack Atlanta between OpenStack Foundation board members Todd Moore (IBM) and Rob Hirschfeld (Dell/Community).  We share a background in industrial and software process and felt that lean manufacturing lessons translate directly to the challenges OpenStack faces.

While OpenStack has done an amazing job of growing contributors, scale has caused our code flow processes to be bottlenecked at the review stage.  This blocks flow throughout the entire system and presents a significant risk to both stability and feature addition.  Flow failures can ultimately lead to vendor forking.

Fundamentally, Todd and I felt that OpenStack needs to address system flows to build an integrated product.  This post expands on the “hidden influencers” issue and adds an additional challenge: improving flow requires that community influencers better understand the need to optimize work across projects in a more systematic way.

Let’s start by visualizing the “OpenStack Factory”

Factory Floor from Alpha Industries Wikipedia page

Imagine all of OpenStack’s 1000s of developers working together in a single giant start-up warehouse.  Each project has its own floor area with appropriate foosball tables, break areas and coffee bars.  It’s easy to visualize clusters of intent developers talking around tables or coding in dark corners while PTLs and TC members dash between groups coordinating work.

Expand the visualization so that we can actually see the code flowing between teams as little colored boxes.  Giving each project a unique color allows us to quickly see dependencies between teams.  Some features are piled up waiting for review inside teams while others sit on pallets between projects, waiting on cross-project features that have not completed.  At release time, we’d be able to see PTLs sorting through stacks of completed boxes to pick which ones were ready to ship.

Watching a factory floor from above is a humbling experience and a key feature of systems thinking enlightenment in both The Phoenix Project and The Goal.  It’s very easy to be caught up in a single project (local optimization) and miss the broader system implications of local choices.

There is a large body of work about Lean Process for Manufacturing

You’ve already visualized OpenStack code creation as a manufacturing floor: it’s a small step to accept that we can use the same proven processes for software and physical manufacturing.

As features move between teams (work centers), it becomes obvious that we’ve created a very highly interlocked sequence of component steps needed to deliver product; unfortunately, we have minimal coordination between the owners of the work centers.  If a feature needs a critical resource (think: a programmer) to progress, then we rely on that resource to allocate time to the work.  Since that person’s manager may not agree with the priority, we have a conflict between system flow and individual optimization.

That conflict destroys flow in the system.

The #1 lesson from lean manufacturing is that putting individual optimization over system optimization reduces throughput.  Since our product and people managers are often competitors, we need to work doubly hard to address system concerns.  Worse yet, our inventory of work in process and the interdependencies between projects are harder to discern.  Unlike the manufacturing floor, our developers and project leads cannot look down upon it and see the physical work as it progresses from station to station in one single holistic view.  The bottlenecks that throttle the OpenStack workflow are harder to see, but we can find them, as demonstrated later in this post.

Until we can engage the resource owners in balancing system flow, OpenStack’s throughput will decline as we add resources.  This same principle is at play in the famous aphorism: adding developers makes a late project later.

Is there a solution?

There are lessons from Lean Manufacturing that can be applied

  1. Make quality a priority (expand tests from function to integration)
  2. Ensure integration from station to station (prioritize working together over features)
  3. Make sure that owners of work are coordinating (expose hidden influencers)
  4. Find and manage from the bottleneck (classic Lean says find the bottleneck and improve it; see the sketch after this list)
  5. Create and monitor a system view
  6. Have everyone value finished product, not workstation output
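
To make lesson 4 concrete, here is a toy model (with invented numbers, purely illustrative) of why local speed-ups don’t help: in a serial flow, throughput is capped by the slowest work center, so capacity added anywhere else is wasted until the bottleneck itself improves.

    # Toy model of a serial patch flow: write -> review -> gate -> merge.
    # Capacities are made-up "patches per day"; min() is system throughput.
    def throughput(stations):
        return min(stations.values())  # the slowest station sets the pace

    stations = {"write": 40, "review": 8, "gate": 25, "merge": 30}
    print(throughput(stations))   # 8  -- review is the bottleneck

    stations["write"] = 80        # double a non-bottleneck station
    print(throughput(stations))   # 8  -- no system-level improvement at all

    stations["review"] = 16       # invest at the bottleneck instead
    print(throughput(stations))   # 16 -- system throughput finally doubles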

Added Postscript: I highly recommend reading Daniel Berrange’s email about this.


Jason Boche: BenQ W1070 and the Universal Ceiling Mount

Over the weekend I hung my first theater projector, the BenQ W1070 1080P 3D Home Theater Projector (White), using the BenQ 5J.J4N10.001 Universal Ceiling Mount, both available of course through Amazon.com. While I didn’t expect the installation to be overly complex, I did employ a slow and methodical planning approach before drilling large holes into the new knockdown theater ceiling.

After unboxing the projector and the universal ceiling mount kit, I looked at the instructions, the parts, and the underside of the projector. If you’re reading this, it’s probably because you’re in the same boat I was in – the diagrams don’t closely resemble the configuration of what you’ve got with the W1070. Furthermore, reading some of the reviews on Amazon seems to suggest this universal ceiling mount kit doesn’t work with the W1070 without some modifications to the mounting hardware. I read tales of cutting and filing as well as adding longer bolts, tubing, and washers to compensate for the placement of the mounting holes on the W1070. Not to worry, none of that excess is needed. If you concentrate more on the written instructions rather than the diagrams for mounting the hardware to the projector, it all actually works and fits together as designed with no modifications necessary. The one exception to this is that not all of the parts provided in the kit are used. This perhaps is what leads to some of the initial confusion in the first place. The diagrams suggest a uniform placement of four (4) mounting brackets on the underside of the projector in a ‘cross’ pattern. While this may be the case for some projectors, it’s not at all a representation of the W1070 integration.

For openers, the BenQ W1070 has only three (3) mounting holes meaning only three (3) mounting brackets will be used and not all four (4). Furthermore, the mounting holes are not placed uniformly around the perimeter of the projector. That, combined with the uneven surface of the projector can lead to uncertainty that these products were meant for each other and if so, then how. Simply follow the directions and screw the three brackets into place while allowing a little give so that you can swing the brackets into a correct position. I say _A_ correct position because there are nearly countless positions in which you can configure them and it will still work correctly resulting in a firm mount to the ceiling.

The image below shows an example of how I configured mine:

Next, place the mounting plate on top of the mounting brackets. Slide the mounting screws in the brackets, and gently swing the brackets themselves, so that the screws can extend through one of the channels in the mounting plate. Gently remove the mounting plate and torque the screws attaching the bracket to the projector.

I took some additional steps which may not have been necessary with modern projector technology but nonetheless the methodical approach helps me sleep better at night and reassures me I’m not destroying my ceiling in the wrong spot. I used a felt tip marker to mark a center point on the projector relative to the telescoping pole that will mount to the plate.

I then temporarily removed the mounting plate to measure the telescoping ceiling mount offset relative to the front and center of the projector lens. This measurement translates into the offset for the ceiling mount relative to the center of the room and the distance to the projection wall. Performed correctly, it allowed me to mount the front of the lens 10’10″ from the projection wall (the sweet spot for my calculated screen size, seating, zoom, etc.) as well as mount the lens exactly in the middle of the room from a side-to-side lateral perspective.
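
For anyone repeating that measurement, the arithmetic behind picking a throw distance is simple: distance is the projector’s throw ratio multiplied by the screen’s image width.  Here is a sketch; the 1.15-1.50 throw-ratio range is my assumption for the W1070’s zoom range, so check your own projector’s spec sheet before drilling:

    import math

    def screen_width(diagonal_in, aspect=(16, 9)):
        # Width from the diagonal: w = d * 16 / sqrt(16^2 + 9^2) for 16:9.
        w, h = aspect
        return diagonal_in * w / math.hypot(w, h)

    width = screen_width(100.0)    # hypothetical 100" 16:9 screen: ~87.2" wide
    for ratio in (1.15, 1.50):     # assumed min/max zoom throw ratios
        d = ratio * width          # throw distance = ratio * image width
        print("ratio {:.2f}: {:.0f} in = {}'{:.0f}\"".format(
            ratio, d, int(d // 12), d % 12))

The long end of that assumed range puts a 100″ screen within an inch of the 10’10″ figure above, but the point of the exercise is to run your own numbers before committing to holes in the ceiling.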

In closing, the only other thing I’d add here is that if your lag bolts are not hitting studs in the ceiling, don’t bother with the plastic sheetrock inserts. While they may work, I don’t trust them for the amount of money I spent on the projector and I certainly don’t want the projected image wiggling because the projector isn’t firmly mounted to the ceiling. Only one of my lag bolts hit a stud. For the remaining bolts, I went to Home Depot and purchased some low-cost anchor bolts (these are the ones I used along with a fender washer) good for 100 lbs. each. Suffice it to say, the projector is now firmly hung from the ceiling.


Mark Cathcart: Decaying Texas

It’s been an interesting month. I live in Austin, Texas, boomtown USA. Everything is happening in construction, although nothing much in transport. In many ways Austin reminds me of rapidly developing cities in China, India and other developing countries. I’ve travelled some inside Texas, but mostly on I-10 and out East. I’ve tended to dismiss what I’ve seen in small towns, mostly because I figured they were unrepresentative.

Earlier this month I did my first real US road trip. I had my Mum with me for a month and figured a week or so out of the heat of Texas would be a good thing. We covered 2,500 miles, most of it up through Northwest Texas, New Mexico, and Colorado. On the way back we came via Taos, Santa Fe, and Roswell and then back through West Texas.

There they were, small town after small town, decaying. Every now and again you’d drive through a bigger town that wasn’t as bad, but overall massive decay, mostly in the commercial space. Companies had given up, gone bust, or been run out of town by a Walmart 30-50 miles away. Even in the bigger towns, there was really no choice: there were Dollar Stores, Pizza Hut, McDonalds or Burger King, Sonic or Dairy Queen, and gas stations. Really not much else, except maybe a Mexican food stop.

It was only just before sunset on the drive back through West Texas, with my Mum asleep in the backseat, that I worked out that my camera and telephoto lens rested perfectly between the steering wheel and the dashboard, and I started taking pictures. These are totally representative of what I’ve seen all over Texas. Just like the small towns out near Crockett and Lufkin in East Texas; pretty similar to anything over near Midland; outside El Paso; down south towards Galveston.  Decaying Texas.

What there were plenty of, in the miles and miles of flat straight roads, were oil derricks, and tankers, hundreds upon hundreds of them. It’s not clear to me what Governor Perry means when he talks about the Texas Miracle, but these small towns, and to some degree, smaller cities have more in common with the towns and cities in China and India, slowly being deserted, run down in the rush to the big cities.

Interestingly, while writing and previewing this entry on WordPress, it suggested the mybigfatwesttexastrip blog, which ends with the following:

The pictures above tell the story of a dying West Texas town and the changing landscape of population movement away from the agrarian society to the city.


William Leara: PCI-SIG Compliance Workshop, Taipei, Taiwan

original announcement:

https://www.pcisig.com/events/apaccompliance_workshop/


Dear PCI Developer,

Registration is now open for the PCI-SIG(R) Compliance Workshop #91, which will be held October 28-31, 2014 in Taipei, Taiwan!

Objective

The PCI-SIG Compliance Workshop #91 is held to promote PCI Express(R) specification compliance in the industry with the goals of eliminating interoperability issues and ensuring proper implementation of PCI specifications. Participation provides an opportunity to find and fix problems before release. This saves your company time and resources while offering valuable networking and training opportunities with your fellow engineers. Official testing capabilities for Workshop #91 include PCI Express 3.0 and PCI Express 2.0.

Cost

Attendance at this members-only event is free. Please note that your credit card information will be collected for product registration(s); however, you will not be charged unless you fail to bring your product to the event or fail to cancel your product registration by 12 noon, Friday, September 26, 2014.

Registration Information and Deadlines

Onsite registration is not available. We do not accept onsite product registrations, so you MUST register your product prior to the registration cut-off date of 12 noon PT on Friday, September 26, 2014. Your testing schedule will be created based on the information you provide for your registered product, so please be sure that any changes to your product’s information are completed prior to 12 noon PT on Friday, September 26, 2014. No product detail changes may be made after registration has closed, as we will be distributing anonymized testing schedules in advance of the event. Name badges and non-anonymized test schedules will be distributed on Tuesday morning from 8:00-8:45am outside the PCI-SIG Hospitality Suite (Room 401).

In case registration exceeds our testing capacities, we have established a reasonable cap for each product type and revision. If these caps are reached during online registration, we will put any additional products on a waiting list and notify the product registrants. Products will be moved from the waiting list to full registration if possible, based on the order of their attempted pre-registration and will be notified the week of October 6.

System Vendors: System vendors are required to bring a laptop with a compatible browser (Chrome or Firefox) to the workshops for use in the Interoperability test suites, so that Interoperability test results can be submitted wirelessly to a Hospitality Suite server. The wireless application will provide a means for saving the test results to a soft-copy PDF file in the gold suites and interoperability test suites. Additionally, a URL will be provided along with login information where testers may view their test results and download a soft-copy PDF after the workshop. These results are only available until the next scheduled workshop.

You must register your products and reserve your hotel room before the cut-off dates to confirm your space at the event. Hotel reservations will not be accepted after Monday, October 13, 2014 and registration will close 12noon on Friday, September 26, 2014. All members can register and find additional information online at http://www.pcisig.com/events/apaccompliance_workshop/.  

Best Regards,

PCI-SIG Administration
3855 SW 153rd Drive
Beaverton, OR 97003
Phone: (503) 619-0569

Hollis Tibbetts (Ulitzer): To Heck with 'Big Data,' 'Little Data' Is the Problem Most Face

"Big data" gets all the press - but for the vast majority of people who work with data, it's the proliferation of "little data" that impacts us the most. What do I mean by little data? I'm referring to the proliferation of various SaaS and Cloud-based applications, on-premises applications, databases, spreadsheets, log files, data files and so forth. Many organizations are plagued with multiple instances of the same applications or multiple applications from different vendors that do essentially the same thing. These are the applications and data that run today's enterprise - and they're a mess.

read more

William LearaCould This Be The Wrongest Prediction Of All Time?

In yet another fantastic Computer Chronicles episode, Stewart and Gary are this time talking to computer entrepreneurs. The year is 1984.  Among the guests are Gene Amdahl, Adam Osborne, and the co-founder and CEO of Vector Graphic Inc., Lore Harp.

The context is a general discussion about the PC industry, asking where can entrepreneurs successfully innovate, and how is it possible for start-ups to compete with IBM.

Gary’s question to Lore:
I know that you’ve been involved very closely with the whole industry as it’s switched toward IBM hardware; what are your feelings about the PC clones?
…and Lore’s response:
In my opinion, they are not going to have a future …
I don’t think they are going to be a long term solution.
The Computer Chronicles, 1984
Little did she know that IBM would stop being a serious PC competitor within ten years, and would stop selling PCs altogether in twenty.

What fascinates me about this crazy-bad prediction is that she brings up some interesting points, but then manages to come away with the exact wrong conclusion.  Listing her remarks one by one:

1. Clones are not creating any value—putting hardware together and buying software that are available to anyone

That the clone makers were putting together off-the-shelf hardware and software is incontrovertible.  However, the question she should have asked is “why would anyone pay a premium for the same batch of off-the-shelf hardware and software just because it says ‘IBM’ on the front?”  In other words, the off-the-shelfness (I made that word up) of the PC industry was a threat to IBM, not to the clone makers.

2. Clones are not creating anything that makes them proprietary

I guess that was the prevailing business wisdom at the time—you create value by creating something proprietary and lock customers in to your solution.  What would she think of today’s industry around open source software?

Of course IBM ended up following exactly this strategy themselves—creating a proprietary system:  the PS/2 running OS/2.  The market refused to accept it and to become beholden to one vendor.  In the end, it was actually the PC clone makers’ lack of proprietary technology that ensured their eventual triumph over IBM.

3. If IBM takes a different turn, software vendors will follow suit, leaving out clone makers

As with her other remarks, this one also turned out to be quite prescient—IBM did indeed take a different turn and created the PS/2 with Micro Channel running OS/2.  But rather than the software vendors following IBM, they abandoned IBM.  Microsoft quit development of OS/2 and bet the company on Windows and Windows NT.  The software industry followed the clone makers, not IBM.

4. Clone makers cannot move as quickly as IBM (?!?!?!) because IBM will have planned their move in advance

What is hilarious about this statement is that of all the myriad things one could say about Big Blue, “moving quickly” is not one of them.  Anyway, as already mentioned, IBM planned their move years in advance and introduced their own proprietary hardware and software system.  The clones moved even quicker and standardized on ISA/EISA and Windows.  The rest is history!


Full episode:  https://archive.org/details/Computer1984_5

Whatever happened to Lore Harp and Vector Graphic?

William LearaAs the Apple ][ Goes, So Goes the iPhone

With the great success of the iPhone comes many illegal knock-off manufacturers.  Sound familiar?  It should—Stewart Cheifet reported the same thing happening to a previous Apple product, the Apple ][ … in 1983!

Check out the video clip from a 1983 edition of The Computer Chronicles:

William LearaApple iWatch Revealed! (in 1985)

In another great episode of the Computer Chronicles, Stewart and Gary demonstrate a watch-based computer.  In yet another example of “the more things change, the more they stay the same”, Stewart makes the remark:

Is this another example of technology in search of a purpose?

That is the topic still being debated today, thirty years later:  will the Samsung Galaxy Gear, Pebble watch, or the iWatch have real value, or is it just technology for technology’s sake?  Are people willing to carry 1) a smart phone, 2) a computer or tablet, and 3) wear a watch?  It’s great to see how the “next big thing” today is really just another attempt at what was tried thirty years ago.

Is a wrist-computer worthwhile?  Leave a comment with your thoughts!

Full episode:  https://archive.org/details/portablecomp


Hollis Tibbetts (Integration)Application Proliferation Accelerates - CIOs Unaware of Impending Integration Headaches

The advancement of technology has led to widespread Cloud application usage throughout businesses and corporations. Usage is so widespread that IT is largely caught unaware of the impending integration (not to mention security, backup/recovery, compliance and governance) headaches that result from such rapid proliferation.

Even without this SaaS and Cloud "explosion", organizations already faced a huge challenge integrating all their legacy and on-premises applications and data sources in order to more optimally run, manage and make critical decisions about the business. Over the past decades, enterprises purchased a large number of on-premises software packages to improve both the efficiency and effectiveness of their operations - and in most cases created an unintegrated "hairball" of information and process architecture.

Despite the evolution of various application and software platforms, integration architectures and so forth, enterprises still find themselves unable to "catch up" with the rapid growth in applications and data sources - and are therefore unable to take full advantage of all their data.

Business Intelligence expert Gaute Solaas, CEO of software vendor iQumulus comments, "The typical enterprise has thousands of data sources and applications, and there is an increasing number of data-producing devices and entities on the horizon. IT isn't prepared to deal with that - businesses need tools to easily and cost-effectively harness this ever-increasing number of disparate data sets - and enable the productive and meaningful presentation of the resultant information to individuals across the organization."

SaaS and Cloud technologies bring tremendous benefits to the organization; however, everything has a downside - these days, anyone with a credit card and $25 to spend can create a new application and data island. No longer does IT need to be involved - or even aware of its creation. And increasingly IT isn't aware - and that's troubling.

In an era where the concept of "instant gratification" is increasingly being applied to applications and data storage (thanks to SaaS and Cloud), individuals, small groups, departments and line-of-business owners are swiping their credit cards and getting "instant" business applications - without regard for the downstream consequences, such as integration, Business Intelligence, security, compliance and backup/recovery. (Just because someone else hosts your data doesn't mean it's necessarily safe, secure or even backed up; many organizations face a major financial risk with SaaS and Cloud applications.)

In the rush to take advantage of these easy-to-procure-and-deploy application, storage and computing solutions, there is a real consequence - the proliferation of cloud silos across the enterprise, largely unknown to IT.
Unfortunately, SaaS and Cloud vendors are largely resistant to incorporating frameworks such as Dell Boomi (and others) that make their products simple to integrate with existing systems.

Jason Haskins, Data Architect at Alchemy Systems, a rapidly growing international company that delivers innovative technologies and services for the global food industry, has to deal with thousands of different data sources as part of his Business Intelligence data architecture. He anticipates the number of disparate sources could easily double in the next 24 months. "Embracing all these different formats and creating a system with a focus on usability, flexibility and scalability is the key to success in this area. It's typically a big mistake for IT to try to force people to restructure their data or to change the way they do business. By bridging the IT and the business world with a flexible and easy to use system, everybody wins."

Don't expect this trend and the integration headaches to slow down - the burgeoning market for Mobile applications will add fuel to this fire. Chris McNabb, General Manager of Dell Boomi commented, "To take competitive advantage of the cloud, companies are desperately looking for ways to accelerate the development of integration flows between their various cloud, on-premises and mobile applications."

Meanwhile, IT continues to be held responsible for many of the implications resulting from this widespread proliferation. Security, governance and compliance are just the tip of the iceberg. Integrating all these disparate systems to automate processes or build effective Business Intelligence systems is another - and online backup and disaster recovery planning is yet another.

A recent study by Netskope validates this app and data explosion - and how IT is being caught unaware. They found that IT experts misjudged Cloud application usage within their companies by as much as 90%. In the Netskope report, IT professionals estimated that their company only used 40 to 50 applications. The actual number: nearly 400 Cloud applications. And this is in addition to the hundreds to thousands of disparate and often distributed on-premises "legacy" systems in most organizations.

Mark CathcartDell PowerEdge 13g Servers with NFC

Although I have not worked in the server group at Dell for almost 3 years, I was delighted to see, among the innovations announced at yesterday’s PowerEdge 13g launch, the Near Field Communication (NFC) concept and prototype I proposed just over 2 years ago.

Enhanced at-the-server management, from anywhere: Dell introduces iDRAC Quick Sync, using Near Field Communication (NFC), an industry first. It is one example of many that belies the commonly held notion that Dell doesn’t innovate.

For customers managing at-the-box, this new capability transmits server health information and basic server setup via a hand-held smart device running OpenManage Mobile, simply by tapping it at the server. OpenManage Mobile also enables administrators to monitor and manage their environments anytime, anywhere with their mobile device.



Mark CathcartLet’s Go do rail like Houston!

Mark Cathcart:

Fantastic write-up on the mechanics and the rights and wrongs of Prop. 1. Just vote NO. I can’t vote until 2016; make your vote count for both of us.

Originally posted on Keep Austin Wonky:

Advocates for this November’s ‘road and rail’ Proposition 1 would like the electorate to believe the proposed light rail segment will achieve success similar to Houston’s stellar Red Line. Here are the top 3 reasons why they are wrong and why it matters.


Source: National Transit Database. “UPT” means unlinked passenger trip (i.e. boarding). Median values for a category in bold.

 

View original 792 more words


Gina MinksWhat does the death of Twitter mean to online enterprise tech communities?

You probably have heard about the changes Twitter is planning so the timeline can be more “user friendly”. Twitter wants to take the noise out of your timeline by determining what you should see, much like Facebook does. I think this marks the end of an era. And I’m not alone. In the blog post “something is rotten in the state of…Twitter”, @bonstewart discusses the ways social is just not what it used to be.

read more here

Kevin HoustonIntroducing the Cisco UCS B200 M4 Blade Server

Alongside today’s announcement of the Intel Xeon E5-2600 v3, Cisco introduced the UCS B200 M4 Blade Server.  Here’s a quick overview of it.

The 4th generation of the Cisco UCS B200 will offer the following:

Cisco UCS B200 M4 Blade Server

  • Up to 2 x Intel Xeon E5-2600 v3 CPUs
  • 24 DIMMs of DDR4 memory delivering speeds up to 2133MHz and a maximum capacity of 768GB
  • 2 x hot plug HDD or SSDs
  • Dual SDHC flash card sockets (aka Cisco FlexFlash)
  • Cisco UCS Virtual Interface Card (VIC) 1340: a 2-port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) mezzanine adapter.

Model specifics have not been provided at this time, but Cisco has released a datasheet which you can find here.

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

 

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

Kevin HoustonHP Announces the BladeSystem BL460c Gen 9

Today HP announced their next 2-socket blade server based on the Intel Xeon E5-2600 v3 CPU, the BL460c Gen 9.  Here’s a quick summary of it.

  • Up to 2 x Intel Xeon E5-2600 v3 CPUs (up to 18 cores per CPU)
  • 16 x DDR4 DIMM slots providing up to 512GB of RAM
  • Support for up to 2 x 12Gb/s SAS HDD or SSD

I wish I could provide more information, but unfortunately HP didn’t publish details on this new product beyond a brief announcement, so I don’t have any additional information, including when it will be available to order.  As I get details, I’ll update this blog post, so check back in the future.

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

 

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

Kevin HoustonIntel Announces the Xeon E5-2600 v3 CPU

Today Intel announced the next generation of their x86 CPU,  the Xeon E5-2600 v3.  The specific CPU models being offered vary by server vendor, so here’s a summary of what the new CPU will provide.

Summary of the Intel E5-2600 v3 CPU:

  1. Increase in CPU Cores – up to 18 cores, with additional offerings of 16, 14, 12, 10, 8, 6 and 4 cores.
  2. Increase in Shared Cache – up to 45MB of Last Level Cache (LLC)
  3. Increase in QPI Speed – up to 9.6GT/s
  4. New DDR4 Memory – 4 x DDR4 channels supporting 32GB DIMMs (64GB in the future); max speed of 2133MHz (see the quick back-of-envelope math below)
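
Since the summary invites a little arithmetic, here is a rough back-of-envelope sketch (in Python) of the per-socket memory numbers these figures imply. This is my own math, not Intel’s published data, and the three-DIMMs-per-channel layout is an assumption based on typical two-socket board designs:

    channels = 4                # DDR4 memory channels per socket (from the summary)
    dimms_per_channel = 3       # assumption: typical 2-socket board layout
    dimm_gb = 32                # 32GB DIMMs supported at launch
    capacity_gb = channels * dimms_per_channel * dimm_gb
    # 384GB per socket, i.e. 768GB in a 2-socket server

    transfers_per_sec = 2133e6  # DDR4-2133 runs at 2133 MT/s per channel
    bytes_per_transfer = 8      # each channel is 64 bits wide
    bandwidth_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
    # ~68.3 GB/s peak theoretical memory bandwidth per socket

    print(capacity_gb, round(bandwidth_gb_s, 1))

Note how the two-socket total lines up with the 24-DIMM, 768GB maximum quoted for the blade servers announced alongside this CPU.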

Intel Xeon E5-2600 v3 CPU Overview

For more details on the Intel Xeon E5-2600 v3, check out this great write-up on TomsITPro.com

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

 

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

William LearaUSB 3.1 Developer Days, Berlin, Germany

original announcement:
http://www.usb.org/developers/events/USB31DevDaysBerlin/

The USB 3.1 Specification adds a SuperSpeed USB 10Gbps speed mode that uses a more efficient data encoding and will deliver more than twice the effective data throughput of existing SuperSpeed USB over enhanced, fully backward compatible USB connectors and cables. The specification extends the existing SuperSpeed mechanical, electrical, protocol and hub definitions while maintaining compatibility with existing USB 3.0 software stacks and device class protocols, as well as with existing 5Gbps hubs and devices and USB 2.0 products.
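
As a quick sanity check on the "more than twice" claim, here is a small sketch of the line-rate math, assuming the commonly cited encodings (8b/10b for 5Gbps SuperSpeed, 128b/132b for the new 10Gbps mode); it ignores protocol overhead above the encoding layer:

    gen1_raw = 5.0                        # Gbps signaling rate, existing SuperSpeed USB
    gen2_raw = 10.0                       # Gbps signaling rate, the new 10Gbps mode
    gen1_eff = gen1_raw * 8 / 10          # 8b/10b encoding: 20% overhead -> 4.0 Gbps
    gen2_eff = gen2_raw * 128 / 132       # 128b/132b encoding: ~3% overhead -> ~9.7 Gbps
    print(round(gen2_eff / gen1_eff, 2))  # ~2.42x effective throughput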

The USB Type-C Cable and Connector Specification defines a new USB connector solution that extends the existing set of cables and connectors to enable emerging platform designs where size, performance and user flexibility are increasingly critical. The specification covers all of the mechanical and electrical requirements for the new connector and cables. Additionally, it covers the functional requirements that enable this new solution to be reversible both in plug orientation and cable direction, and to support functional extensions that designers are looking for in order to enable single-connector platform designs.

The USB Power Delivery Specification defines a sideband communications method used between two connected USB products to discover, configure and manage power delivered across VBUS, with control over power delivery direction, voltage (up to 20V) and current (up to 5A). The USB Power Delivery 2.0 update adds a new communications physical layer that is specific to the USB Type-C cable and connector solution. The specification also extends the definition of Structured Vendor Defined Messages (VDMs) to enable the functional extensions that are possible with the USB Type-C solution.
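
The voltage and current ceilings above define the power envelope. A tiny worked example (my arithmetic, not spec text), with a legacy USB 2.0 high-power port shown for contrast:

    pd_max_watts = 20 * 5               # 20V x 5A = 100W maximum negotiable power
    usb2_watts = 5 * 0.5                # USB 2.0 high-power port: 5V x 500mA = 2.5W
    print(pd_max_watts / usb2_watts)    # a 40x increase in deliverable power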

What:  USB 3.1 Developer Days is an opportunity to review these specifications and engage with experts in a face-to-face setting
When:  The conference will be held October 1-2, 2014
Cost:  Members US $475.00
           Non-members US $960.00
Registration will close on Monday, September 22 at 5:00PM US Pacific Time. All attendees MUST be pre-registered as on-site registration will not be available.

Agenda (subject to change):
Day 1:  USB 3.1 (featuring the new USB Type-C connector)

- Registration check-in
- Introduction
- USB 3.1 Architectural Overview
- USB 3.1 Physical and Link Layers
- USB Type-C Functional Requirements
- USB 3.1 Protocol Layer
- USB 3.1 Hub
- USB 3.1 Compliance

Day 2: Track One
- USB Cables and Connectors (including USB Type-C)
     * Overview
     * USB Type-C Mechanical requirements and compliance
     * USB Type-C Electrical/EMC requirements and compliance
- USB 3.1 System Design
     * USB 3.1 design and interoperability goals, and design envelope (EQ capability, channel loss budget)
     * System simulation:  reference channels and reference equalizers
     * Key system performance metrics and design trade-offs
     * Design recommendations and trade-offs for package and PCB designs
     * Silicon design considerations, including equalizers and system margining
     * Re-timing repeater design requirements
     * Design to minimize EMI & RFI

Day 2:  Track Two
- USB Power Delivery 2.0
     * Introduction and Architectural Overview
     * Electrical/Physical Layer
     * Protocol Layer
     * Protocol Extensions (specific to USB Type-C)
     * Device and System Policy
     * Power Supply
     * Compliance

Where:  Sofitel Berlin Kurfürstendamm
            Augsburger Strasse 41
            10789 - Berlin
            Germany
http://www.sofitel.com/gb/hotel-9387-sofitel-berlin-kurfurstendamm-/index.shtml

Tel.: (+49) 30 800 9990
Fax: (+49) 30 800 99999

Hotel Accommodations
The group room block is at the Sofitel Berlin Kurfürstendamm. To receive the group sleeping room rate of EUR 145 per night (single occupancy, includes tax, breakfast and guestroom internet) attendees should make their reservations by completing the Hotel Reservation Form and submitting it directly to the hotel via fax or email. A double occupancy rate of EUR 165 is also available. The reservation deadline is Monday, September 15, 2014. Reservations received after September 15th are subject to availability and room type and will be offered at the group rate based on availability only. 

A major credit card is needed to guarantee guestroom reservations. Any reservation cancellations should be made by September 25th to avoid cancellation penalties. If the room is cancelled after this date or is not checked in on the day of arrival, the hotel will charge 100% of the agreed room rate for the entire stay to the credit card on file.

Hotel check-in time is 3:00pm. Check-out time is 12:00pm.  Early check-in and late check-out are subject to availability. 

Hotel:  Sofitel Berlin Kurfürstendamm
Cut-Off Date:  Monday, September 15, 2014
Group Rate:  EUR 145 per night

Kevin HoustonA First Look at the Dell PowerEdge M630

The PowerEdge M630, Dell’s newest blade server based on the Intel Xeon E5-2600 v3, was announced today.  Although specifics haven’t been officially posted on Dell’s website, a video revealing some highlights of the newest member of the PowerEdge family was found on YouTube by Gartner analyst @Daniel_Bowers, so here is a quick look at it.


The PowerEdge M630 is a half-height blade server with up to 2 x Intel Xeon E5-2600 v3 CPUs (up to 36 cores), 24 DDR4 DIMMs, up to 4 x 10GbE CNA ports, plus support for up to 2 additional I/O mezzanine expansion cards (up to 8 x 10GbE total ports).  Best of all is the “4 drive configuration” shown in the image to the left (M630 with 4 x 1.8″ SSDs).  More details on that when they become available…
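
If you are wondering how 4 onboard ports plus 2 mezzanine cards gets to 8 total ports, the math works out if each mezzanine card is dual-port; that is my assumption, since Dell hasn’t published the card options yet:

    onboard_ports = 4          # 10GbE CNA ports on the motherboard
    mezz_cards = 2             # additional I/O mezzanine slots
    ports_per_mezz = 2         # assumption: dual-port 10GbE mezzanine cards
    total_ports = onboard_ports + mezz_cards * ports_per_mezz
    print(total_ports)         # 8 x 10GbE total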

UPDATED: The newest addition to this blade server is the use of 1.8″ Solid State Drives (SSDs) offering high performance at an affordable price point. Dell has not published the available drive sizes, but as they become available, I’ll publish them here.

Check out the full video on the Dell PowerEdge M630 Blade Server here.

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 17 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

Kevin HoustonA Look at the Cisco UCS M-Series

On September 4th, Cisco released a new line of modular servers under the UCS family known as the M-Series.  Interestingly enough, Cisco’s not calling the new servers “blade servers”; instead they are taking a play out of HP’s Moonshot playbook and calling them “cartridges.”   The M-Series won’t be available until Q4 of this year, but in this blog post, I’ll highlight the information Cisco has provided.
Cisco is taking a unique approach with the UCS M-Series.  Veering away from the traditional server model of each server having its own NIC and RAID controller, the servers in the M-Series are “disaggregated” and share a NIC and storage.  Although this platform is ideal for nearly any single-threaded application, Cisco appears to be targeting the M-Series at “Cloud-Scale Applications.”

M4308 Chassis

The chassis for the new M-Series is known as the M4308 and is a 2U form factor that holds 8 x quarter-width M142 server cartridges – more on these below.  As you can see in the image, the front of the chassis is not very complex.  On the left side is a series of LEDs that give basic information on the chassis, such as whether it has power, whether there are any alerts and whether there is network connectivity.  On the right side you’ll notice LEDs numbered 1 – 8 signifying the cartridges, most likely confirming they are connected and powered on.

The rear of the chassis houses the 4 x SSD drive bays (choice of SAS or SATA drives with capacities ranging from 240 GB to 1.6 TB per disk), which are connected to a single 12G modular RAID controller with 2GB flash-backed write cache (FBWC).  The chassis shares 2 x 1400W power supplies and has 2 x 40GbE uplinks.  From what I can understand, these 40GbE links connect to the single internal Virtual Interface Card that is shared across each of the server cartridges (which equates to 5GbE per server).  On the left side of the rear of the chassis is what appears to be a PCIe slot that could be shared across the server cartridges; however, nothing was mentioned in the blog or data sheets, so that slot’s use is unclear.  One thing they did mention in the Cisco blog is that the design of sharing RAID and NICs is enabled by something called UCS System Link Technology – a silicon-based technology that gives the M-Series the ability to connect these disaggregated subsystems via a UCS System Link fabric and create a truly composable infrastructure. Based on details from the data sheet, the 40GbE uplinks will connect directly into the UCS 6200 Fabric Interconnect, and up to 20 M4308 chassis can be connected in a single domain.  Hopefully Cisco will reveal more about this technology as it gets closer to availability in Q4.
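
For what it’s worth, here is how that 5GbE-per-server figure falls out of the uplink count, assuming the two 40GbE uplinks are shared evenly across all compute nodes (my arithmetic, not Cisco’s):

    uplink_gbps = 2 * 40       # two 40GbE uplinks on the chassis
    nodes = 8 * 2              # 8 cartridges x 2 servers per cartridge
    per_node_gbps = uplink_gbps / nodes
    print(per_node_gbps)       # 5.0 Gbps of shared bandwidth per server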

M142 Server Cartridge

The Cisco UCS M-Series servers are nothing like the UCS B-Series blade servers, which is perhaps why Cisco is calling them “cartridges”.  A single cartridge actually holds 2 servers, each with 1 x Intel E3 CPU and 4 x 8GB DDR3 1600MHz DIMMs.  The Intel E3 CPU options being offered are:

  • Intel® Xeon® processor E3-1275L v3 (8-MB cache, 2.7 GHz), 4 cores, and 45W
  • Intel® Xeon® processor E3-1240L v3 (8-MB cache, 2.0 GHz), 4 cores, and 25W
  • Intel® Xeon® processor E3-1220L v3 (4-MB cache, 1.1 GHz), 2 cores, and 13W

A quick observation – if you multiply 45W x 16 compute nodes, you come out to 720W.  As mentioned above, the chassis has 2 x 1400W redundant power supplies, so this leaves roughly 680W of a single supply’s capacity for the VIC, RAID and everything else – or is this a preview of what Cisco’s next cartridge might require?
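
Making the redundancy assumption explicit, the power budget looks something like this (a sketch using my own assumptions, not Cisco’s published numbers):

    cpu_tdp_w = 45             # top-bin E3-1275L v3 TDP
    nodes = 16                 # 8 cartridges x 2 servers each
    cpu_budget_w = cpu_tdp_w * nodes   # 720W if every node runs the 45W part
    usable_w = 1400            # one PSU's capacity, assuming a 1+1 redundant pair
    headroom_w = usable_w - cpu_budget_w
    print(cpu_budget_w, headroom_w)    # 720W for CPUs, ~680W for everything else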

For more information on the Cisco UCS M-Series, visit Cisco’s website.

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

 

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

Hollis Tibbetts (Ulitzer)CIO Shocker: #Cloud and SaaS Surprise Awaits

The advancement of technology has led to widespread Cloud data and SaaS application usage throughout enterprises, and CIOs are unprepared for the (mostly unwelcome) implications - largely unaware of the "SaaS Sprawl" in their organizations. These Cloud applications are available for just about every role in a company - from human resources to marketing, there's an app for that. And odds are, someone in your organization is using it - most likely without IT knowing. As app (primarily SaaS and Cloud) use within organizations continues to spread and accelerate, IT professionals remain in the dark about the massive scale of Cloud application utilization, yet IT continues to be held responsible for many of the implications resulting from this widespread proliferation.

read more

Mark CathcartRail isn’t about Congestion

It’s not going to fix congestion.

Prop. 1 on the Austin November ballot is an attempt to fund the largest single bond in Austin history, with almost half of the $1 billion going to the light rail proposal.

Finally people seem to be getting the fact that the light rail, if funded, won’t help with the existing traffic. KUT had a good review of this yesterday, and the comments include some useful links. You can listen to the segment here: Is a Light Rail Line Going to Solve Austin’s Traffic Problems?

Jace Deloney makes some good points. What no one is saying, though, is what I believe is the real reason behind the current proposal: there is a real opportunity to develop a corridor of key central Austin land (some unused and much of it underused) west of I-35, from Airport all the way down to Riverside Dr.

This is hugely valuable land, but encouraging development would be a massive risk, purely because of existing congestion. Getting more people to and from buildings in that corridor, by car or even bus (for denser residential accommodation, a medical school, a UT expansion or re-site, more office space, whatever), will be untenable in terms of both west/east and south/north congestion. So the only way this could really work is to make a rail corridor, with stations adjacent to the buildings.

The Guadalupe/Lamar route favored by myself and other rail advocates would add almost no value to that new corridor. It’s debatable whether it would ease congestion on the west side of town either. But with a rail transit priority system and the new toll lanes on MoPac, the ability to get around at peak times and the elimination of a significant number of cars in the central west and downtown areas would make it worth the investment.

Voters need to remember this when considering which way to vote in November. If the city, UT, and developers want to develop that corridor, they should find some way of funding rail from those that will directly benefit. Citywide economic impact, new tax revenues, new jobs: it’s a sleight of hand, a misdirection.

It’s not acceptable to load the cost onto existing residents for little benefit, just so developers can have their way.


William LearaFall 2014 UEFI Plugfest

The UEFI Testing Work Group (UTWG) and the UEFI Industry Communications Work Group (ICWG) from the Unified EFI (UEFI) Forum invite you to the upcoming UEFI Plugfest being held October 13-17, 2014 in Taipei, Taiwan.

If you require formal invitation documents for Visa application/traveling purposes, please contact Tina Hsiao for more information.

UEFI membership is required to attend UEFI Testing Events & Workshops. If you are not yet a UEFI member, please visit UEFI.org/join to learn about obtaining UEFI membership.

Please stay tuned for updates regarding the Fall 2014 UEFI Plugfest. Registration and other logistical information will be provided very soon.

 

Event Contact

Tina Hsiao, Insyde Software

Phone: (02) 6608-3688 Ex: 1599

Email: uefi.plugfest@insyde.com

Rob HirschfeldVMware Integrated OpenStack (VIO) is smart move, it’s like using a Volvo to tow your ski boat

I’m impressed with VMware’s VIO (beta) play and believe it will have a meaningful positive impact in the OpenStack ecosystem.  In the short-term, it paradoxically both helps enterprises stay on VMware and accelerates adoption of OpenStack.  The long term benefit to VMware is less clear.

From VWVortex

Sure, you can use a Volvo to tow a boat

Why do I think it’s good tactics?  Let’s explore an analogy….

My kids think owning a boat will be super fun with images of ski parties and lazy days drifting at anchor with PG13 umbrella drinks; however, I’ve got concerns about maintenance, cost and how much we’d really use it.  The problem is not the boat: it’s all of the stuff that goes along with ownership.  In addition to the boat, I’d need a trailer, a new car to pull the boat and driveway upgrades for parking.  Looking at that, the boat’s the easiest part of the story.

The smart move for me is to rent a boat and trailer for a few months to test my kids’ interest.  In that case, I’m going to be towing the boat using my Volvo instead of going “all in” and buying that new Ferd 15000 (you know you want it).  As a compromise, I’ll install a hitch in my trusty sedan and use it gently to tow the boat.  It’s not ideal and causes extra wear to the transmission, but it’s a very low risk way to explore the boat-owning lifestyle.

Enterprise IT already has the Volvo (VMware vCenter) and likely sees calls for OpenStack as the illusion of cool ski parties without regard for the realities of owning the boat.  Pulling the boat for a while (using OpenStack on VMware) makes a lot of sense to these users.  If the boat gets used, then they will buy the truck and accessories (move off VMware).  Until then, they’re still learning about the open source boating lifestyle.

Putting open source concerns aside, this helps VMware lead the OpenStack play for enterprises, but it may ultimately backfire if they have not set up their long game to keep those customers.


William LearaMy Favorite Obituary

Okay, I know it’s a bizarre title, but bear with me.  Mr. Tom Halfhill, a computer journalist I grew up reading in COMPUTE! magazine, wrote the following “obituary” upon the death of Commodore.  If you’re like me and grew up with a Commodore 64 computer, I think you will find it a poignant tribute.  (have tissues nearby…)

Beautifully written, thoughtful and accurate, this “obituary” best tells the story of Commodore and expresses the spirit of the early personal computer era.

R.I.P. Commodore 1954-1994

A look at an innovative computer industry pioneer, whose achievements have been largely forgotten

Tom R. Halfhill

Obituaries customarily focus on the deceased’s accomplishments, not the unpleasant details of the demise. That’s especially true when the demise hints strongly of self-neglect tantamount to suicide, and nobody can find a note that offers some final explanation.

There will be no such note from Commodore, and it would take a book to explain why this once-great computer company lies cold on its deathbed. But Commodore deserves a eulogy, because its role as an industry pioneer has been largely forgotten or ignored by revisionist historians who claim that everything started with Apple or IBM. Commodore’s passing also recalls an era when conformity to standards wasn’t the yardstick by which all innovation was measured.

In the 1970s and early 1980s, when Commodore peaked as a billion-dollar company, the young computer industry wasn’t dominated by standards that dictated design parameters. Engineers had much more latitude to explore new directions. Users tended to be hobbyists who prized the latest technology over backward compatibility. As a result, the market tolerated a wild proliferation of computers based on many different processors, architectures, and operating systems.

Commodore was at the forefront of this revolution. In 1977, the first three consumer-ready personal computers appeared: the Apple II, the Tandy TRS-80, and the Commodore PET (Personal Electronic Transactor). Chuck Peddle, who designed the PET, isn’t as famous as Steve Wozniak and Steve Jobs, the founders of Apple. But his distinctive computer with a built-in monitor, tape drive, and trapezoidal case was a bargain at $795. It established Commodore as a major player.

The soul of Commodore was Jack Tramiel, an Auschwitz survivor who founded the company as a typewriter-repair service in 1954. Tramiel was an aggressive businessman who did not shy away from price wars with unwary competitors. His slogan was “computers for the masses, not the classes.”

In what may be Commodore’s most lasting legacy, Tramiel drove his engineers to make computers that anyone could afford. This was years before PC clones arrived. More than anyone else, Tramiel is responsible for our expectation that computer technology should keep getting cheaper and better. While shortsighted critics kept asking what these machines were good for, Commodore introduced millions of people to personal computing. Today, I keep running into those earliest adopters at leading technology companies.

Commodore’s VIC-20, introduced in 1981, was the first color computer that cost under $300. VIC-20 production hit 9000 units per day—a run rate that’s enviable now, and was phenomenal back then. Next came the Commodore 64 (1982), almost certainly the best-selling computer model of all time. Ex-Commodorian Andy Finkel estimates that sales totaled between 17 and 22 million units. That’s more than all the Macs put together, and it dwarfs IBM’s top-selling systems, the PC and the AT.

Commodore made significant technological contributions as well. The 64 was the first computer with a synthesizer chip (the Sound Interface Device, designed by Bob Yannes). The SX-64 (1983) was the first color portable, and the Plus/4 (1984) had integrated software in ROM.

But Commodore’s high point was the Amiga 1000 (1985). The Amiga was so far ahead of its time that almost nobody—including Commodore’s marketing department—could fully articulate what it was all about. Today, it’s obvious the Amiga was the first multimedia computer, but in those days it was derided as a game machine because few people grasped the importance of advanced graphics, sound, and video. Nine years later, vendors are still struggling to make systems that work like 1985 Amigas.

At a time when PC users thought 16-color EGA was hot stuff, the Amiga could display 4096 colors and had custom chips for accelerated video. It had built-in video outputs for TVs and VCRs, still a pricey option on most of today’s systems. It had four-voice, sampled stereo sound and was the first computer with built-in speech synthesis and text-to-speech conversion. And it’s still the only system that can display multiple screens at different resolutions on a single monitor.

Even more amazing was the Amiga's operating system, which was designed by Carl Sassenrath. From the outset, it had preemptive multitasking, messaging, scripting, a GUI, and multitasking command-line consoles. Today’s Windows and Mac users are still waiting for some of those features. On top of that, it ran on a $1200 machine with only 256 KB of RAM.

We may never see another breakthrough computer like the Amiga. I value my software investment as much as anyone, but I realize it comes at a price. Technology that breaks clean with the past is increasingly rare, and rogue companies like Commodore that thrived in the frontier days just don’t seem to fit anymore.

My Thoughts

But Commodore deserves a eulogy, because its role as an industry pioneer has been largely forgotten or ignored by revisionist historians who claim that everything started with Apple or IBM.
This is so true.  Especially with the return of Steve Jobs to Apple and that company’s resurgence, people have the following idea of computer history:  Apple invented the personal computer, then IBM and Microsoft unfairly took it over.  That’s ridiculous—in fact, the Commodore PET was launched before the Apple ][.  The TRS-80 was the early PC market leader by virtue of Radio Shack having a nation-wide distribution system in place.  Commodore took over market leadership with the introduction of the VIC-20.  It wasn’t until VisiCalc was released on the Apple ][ (by dumb luck) that Apple caught a break and became a significant company.
The 64 was the first computer with a synthesizer chip (the Sound Interface Device, designed by Bob Yannes). The SX-64 (1983) was the first color portable, and the Plus/4 (1984) had integrated software in ROM.
This reminds me of a comment Steve Wozniak made at the 25th Anniversary of the Commodore 64, a celebration hosted by the Computer History Museum.  He criticized the C64 as not being expandable.  First of all, that’s just plain wrong.  The C64 was just as expandable as an Apple ][, it just used serial, parallel, cassette, and an external expansion port to do it, rather than the internal expansion slot approach used by Apple and others.  But anyway, my main point is that the C64 didn’t have to be so expandable, since, unlike the Apple ][, so much was already built in!  Like the SID sound chip—Apple ][ owners had to buy a separate expansion card; C64 owners had four voice sound for free.  The basic Apple ][e was monochrome—the C64 gave you color for free.

In Closing

Ironically, when Mr. Halfhill says “…and it would take a book to explain why this once-great computer company lies cold on its deathbed”, someone did eventually write that book, and I highly recommend it!
Long live the Commodore 64!

William LearaDMTF Webinars Now Available On-Demand

The Distributed Management Task Force (DMTF) produces standards of great interest to BIOS developers (e.g., SMBIOS).  Did you know that DMTF webinars are now available online for on-demand viewing?

There are currently 20+ talks mainly covering virtualization, storage, cloud computing, and the management of these technologies.  See:

http://www.dmtf.org/education/webinars

Note:  Viewing requires the user to register with BrightTALK.  It’s quick and painless and does not cost anything.

Rob HirschfeldOpenStack DefCore Process Flow: Community Feedback Cycles for Core [6 points + chart]

If you’ve been following my DefCore posts, then you already know that DefCore is an OpenStack Foundation Board managed process “that sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack™ products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled OpenStack™.”

In this post, I’m going to be very specific about what we think “community resources and involvement” entails.

The draft process flow chart was provided to the Board at our OSCON meeting without additional review.  It boils down to a few key points:

  1. We are using the documents in the Gerrit review process to ensure that we work within the community processes.
  2. Going forward, we want to rely on the technical leadership to create, cluster and describe capabilities.  DefCore bootstrapped this process for Havana.  Further, capabilities are defined by tests in Tempest, so test coverage gaps (like Keystone v2) translate into Core gaps (see the illustrative sketch after this list).
  3. We are investing in data driven and community involved feedback (via Refstack) to engage the largest possible base for core decisions.
  4. There is a “safety valve” for vendors to deal with test scenarios that are difficult to recreate in the field.
  5. The Board is responsible for approving the final artifacts based on the recommendations.  By having a transparent process, community input is expected in advance of that approval.
  6. The process is time sensitive.  There’s a need for the Board to produce Core definition in a timely way after each release and then feed that into the next one.  Ideally, the definitions will be approved at the Board meeting immediately following the release.
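
To make the capability-to-test linkage in point 2 concrete, here is a purely illustrative sketch of what one clustered capability might look like as a process artifact.  The structure and names are hypothetical, not an actual DefCore document; the test path only mimics Tempest-style naming:

    # hypothetical capability record, for illustration only
    capability = {
        "name": "compute-servers-create",   # example capability cluster
        "tests": [                          # Tempest tests that define it
            "tempest.api.compute.servers.test_create_server",
        ],
        "core": True,                       # Board-approved for this release
    }
    # A test coverage gap (like the Keystone v2 case above) means the
    # tests list is empty, so the capability cannot be scored for Core.
    print(capability["core"] and len(capability["tests"]) > 0)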

DefCore Process Draft

The process shows how the key components (designated sections and capabilities) start from the previous release’s version, with the DefCore committee managing the update process.  Community input is a vital part of the cycle.  This is especially true for identifying actual use of the capabilities through the Refstack data collection site.

  • Blue is for Board activities
  • Yellow is for user/vendor community activities
  • Green is for technical community activities
  • White is for process artifacts

This process is very much in draft form and any input or discussion is welcome!  I expect DefCore to take up formal review of the process in October.


Rob HirschfeldCloud Culture: No spacesuits, Authority comes from doing, not altitude [Collaborative Series 4/8]

Subtitle: Why flattening org charts boosts your credibility

This post is #4 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Unlike other generations, Digital Natives believe that expertise comes directly from doing, not from position or education. This is not hubris; it’s a reflection of both their computer experience and dramatic improvements in technology usability.

If you follow Joel Spolsky’s blog, “Joel on Software,” you know about a term he uses when describing information architects obsessed with the abstract and not the details: Architecture Astronauts—so high up above the problem that they might as well be in space. “They’re astronauts because they are above the oxygen level, I don’t know how they’re breathing.”

For example, a Digital Native is much better positioned to fly a military attack drone than a Digital Immigrant. According to New Scientist, March 27, 2008, the military is using game controllers for drones and robots because they are “far more intuitive.” Beyond the fact that the interfaces are intuitive to them, Digital Natives have likely logged hundreds of hours flying simulated jets under trying battle conditions. Finally, they rightly expect that they can access all the operational parameters and technical notes about the plane with a Google search.

Our new workforce is ready to perform like none other in history.

Being able to perform is just the tip of the iceberg; having the right information is the more critical asset. A Digital Native knows information (and technology) is very fast moving and fluid. It also comes from all directions … after all it’s The Information Age. This is a radical paradigm shift. Harvard Researcher David Weinberger highlights in his book Too Big to Know that people are not looking up difficult technical problems in a book or even relying on their own experiences; they query their social networks and discover multiple valid solutions. The diversity of their sources is important to them, and an established hierarchy limits their visibility; inversely, they see leaders who build strict organizational hierarchies as cutting off their access to information and diversity.

Today’s thought worker is on the front lines of the technological revolution. They see all the newness, data, and interaction with a peer-to-peer network. Remember all that code on the screen in the movie The Matrix? You get the picture.

To a Digital Native, the vice presidents of most organizations are business astronauts floating too high above the world to see what’s really going on but feeling like they have perfect clarity. Who really knows the truth? Mission Control or Major Tom? This is especially true with the acceleration of business that we are experiencing. While the Astronaut in Chief is busy ordering the VPs to move the mountains out of the way, the engineers at ground control have already collaborated on a solution to leverage an existing coal mine and sell coal as a byproduct.

The business hierarchy of yesterday worked for a specific reason: workers needed to just follow rules, keep their mouth shut, and obey. Input, no matter how small, was seen as intrusive and insubordinate … and could get one fired. Henry Ford wanted an obedient worker to mass manufacture goods. The digital age requires a smarter worker because, in today’s world, we make very sophisticated stuff that does not conform to simple rules. Responsibility, troubleshooting, and decision-making have moved to the frontlines. This requires open-source style communication.

Do not confuse the Astronaut problem as a lack of respect for authority.

Digital Natives respect informational authority, not positional. For Digital Natives, authority is flexible. They have experience forming and dissolving teams to accomplish a mission. The mission leader is the one with the right knowledge and skills for the situation, not the most senior or highest scoring. In Liquid Leadership, Brad explains that Digital Natives are not expecting managers to solve team problems; they are looking to their leadership to help build, manage, and empower their teams to do it themselves.

So why not encourage more collaboration with a singular mission in mind: develop a better end product? In a world that is expanding at such mercurial speed, a great idea can come from anywhere! Even from a customer! So why not remember to include customers in the process?

Who is Leroy Jenkins?

This viral video is about a spectacular team failure caused by one individual (Leroy Jenkins) who goes rogue during a massively multiplayer team game.  This is a Digital Natives’ version of the ant and grasshopper parable: “Don’t pull a Leroy Jenkins on us—we need to plan this out.”  Youtu.be/LkCNJRfSZBU

Think about it like this: Working as a team is like joining a quest.

If comparing work to a game scenario sounds counterintuitive, then let’s reframe the situation. We may have the same destination and goals, but we are from very different backgrounds. Some of us speak different languages, have different needs and wants. Some went to MIT, some to community college. Some came through Internet startups, others through competitors. Big, little, educated, and smart. Intense and humble. Outgoing and introverted.  Diversity of perspective creates stronger teams.

This also means that leadership roles rotate according to each mission.

This is the culture of the gaming universe. Missions and quests are equivalent to workplace tasks accomplished and benchmarks achieved. Each member expects to earn a place through tasks and points. This is where Digital Natives’ experience becomes an advantage. They expect to advance in experience and skills. When you adapt the workplace to these expectations, the Digital Natives thrive.

Leaders need to come down to earth and remove the spacesuit.

A leader at the top needs to stay connected to that information and disruption. Start by removing your helmet. Breathe the same oxygen as the rest of us and give us solutions that can be used here on planet earth.

On Gamification

Jeff Atwood, co-founder of the community-based FAQ site Stack Overflow, has been very articulate about using game design to influence how he builds communities around sharing knowledge. We recommend reading his post “Building Social Software for the Anti-Social” on his blog, CodingHorror.com.


Ryan M. Garcia Social Media LawIceholes: How The ALSA May Win The Battle But Lose The War

You know what we do to bad ice on a pedestal?

The biggest surprise hit of the summer is not Guardians of the Galaxy but rather the megaviral smash Ice Bucket Challenge benefiting the ALS Association. Rather than be thankful for this windfall, the ALSA has recently decided that they should own this challenge and prevent any other cause or organization from using it. What do you think they are, a charity?

Oh yeah, they are.  Then maybe they should start acting like it and not a bunch of selfish iceholes.

First, some background. The ALSA did not create the ice bucket challenge. The gimmick has been around for a long time. In fact, when this latest round started over the summer, it began as a challenge to dump a bucket of ice water on your head or donate $100 to a charity of your choice.  It was only when the challenge passed to professional golfer Chris Kennedy that the donation was directed to the ALSA, and the individuals he tagged kept that charity when they made their videos.  Later, there was a significant wave of ice bucket activity in Boston due to Boston native and ALS sufferer Pete Frates and concerted actions by the Red Sox organization.  Facebook’s data team’s analysis shows that Boston does appear to be the epicenter of the challenge going truly viral.

Nobody is exactly sure why the challenge has reached its current level of popularity, but that’s true for most viral hits in the social media age.  Sure, the videos are funny. And having one person tag several others to participate makes for an exponential reach. And having the challenge somehow associated with charity, so we all think we can have fun while helping out a worthy cause, makes it seem nice too. There are even a scattering of super serious videos in the mix depicting a bit of what the disease means to its victims and their families. We can identify all the elements, but we still don’t know what made this challenge go viral like it did.  Heck, even I did one, although I’m not linking to it given the reasons behind this post.

That doesn’t really matter though. It doesn’t matter that we can’t explain why it went viral; it went viral. It doesn’t matter that perhaps the amount of money we give to charities is out of proportion to the impact of the disease, as the infographic in a Vox article linked by IFLScience suggests; there is no doubt this is a horrific disease and increased attention to it is a good thing. It doesn’t matter that the ALSA only spends a small percentage of its budget on research; it performs several other valuable services, and all charities have to spend money to ultimately make more money in the end.

Here’s what does matter: the ALSA was given the greatest gift of its life in this ice bucket challenge.  Donations are through the roof.  Yesterday they reported receiving over $94.3 million in donations in just the last month.  Last year, in the same period, they received around $2.7 million.  Rather than just say thanks or give the tearful Sally Field “You like me, you really like me!” Oscar acceptance speech, they decided to go another direction. They decided to take that warm fuzzy feeling we’ve had from watching or making these videos and donating to a worthy cause, and pour a giant bucket of ice water on our flames of altruism.

As first reported on the Erik M. Pelton & Associates blog, the ALSA filed an application with the US Patent and Trademark Office to be granted a trademark for the term ICE BUCKET CHALLENGE as used for any charitable fundraising.  They also filed an application for ALS ICE BUCKET CHALLENGE, but it’s the main application that should make people furious.  Heck, it made me mad enough to write a blog post on a Thursday night, and I never do that.

Filing a trademark for the term “Ice Bucket Challenge” would allow them to prevent any other charity from promoting a campaign that the ALSA had fall into their lap.  The ALSA did not create this concept.  They did not market this campaign until it already went viral.  They have no responsibility whatsoever for this going viral.  If the ice bucket challenge had found a connection to the American Heart Association or the American Cancer Society then it could have gone just as viral.

What on earth could make the ALSA think they should have any right whatsoever to prevent someone else from using this challenge?

I can’t think of a good reason.  I can think of reasons, mind you.  They just aren’t good.  Fortune was able to get a statement from ALSA spokesperson Carrie Munk:

The ALS Association took steps to trademark Ice Bucket Challenge after securing the blessings of the families who initiated the challenge this summer. We did this as a good faith effort after hearing that for-profit businesses were creating confusion by marketing ALS products in order to capitalize on this grassroots charitable effort.

Sorry, ALSA, but that excuse doesn’t hold water.

First, obtaining the blessings of the families who created this challenge is nonsense.  Even if you got permission from everyone who ever did an ice bucket challenge–SO WHAT?  This was a charity drive.  You think the first charity to earn a million dollars from a bake sale should get to stop all other bake sales?  Because that’s what filing a trademark on the challenge is an attempt to do–you’re trying to stop any other charity from using the term for fundraising.

Second, you heard some shady companies were making money off the Ice Bucket Challenge?  Wow, that must be weird.  To think there are these companies just sitting around making money off something they didn’t create.  JUST LIKE YOU.  Who cares if someone makes an Ice Bucket Challenge shirt and sells it?  If it says ALSA on it or has your logo you can already go after them without this new trademark application.

The ALSA’s actions are atrocious and reprehensible.  They may have raised a ton of money this summer but it could all backfire over a move like this.

But here, ALSA, I’m going to be nicer than you appear to be.  Here’s a way for you to cover your cold, soaked behinds and spin this in a favorable way.  What you should have done is post on your website the day you filed the application, saying that you are only doing so to protect all charities from shady profiteers but that all charities would be free to use the mark forever for no charge if you received the trademark.  The fact that you didn’t tell anyone about the application and only commented when it was called out on social media (by the way, you’ve heard about this social media thing and how a lot of people use it, right?) you can just blame on being so busy counting all your money.  It’s a bad excuse, but maybe it can save some face.

Because right now you look like a bunch of iceholes and I resent every penny I gave you.  Not for the good work you’ve done, which is a lot, or the families you’ve helped, which are numerous, but for being greedy instead of generous, selfish instead of, you know, charitable.

Update Aug 29: The ALSA has withdrawn their trademark application. Good.


Jason BocheVMworld 2014 U.S. Top Ten Sessions

Following is the tabulated list of the VMworld 2014 U.S. top ten sessions as of noon PST 8/28/14. If you plan on catching up on recorded sessions later, this top ten list is a great place to start. Nice job to all of the presenters on this list, as well as to all presenters at VMworld.

Tuesday – STO1965.1 – Virtual Volumes Technical Deep Dive
Rawlinson Rivera, VMware
Suzy Visvanathan, VMware

Tuesday – NET1674 – Advanced Topics & Future Directions in Network Virtualization with NSX
Bruce Davie, VMware

Tuesday – BCO1916.1 – Site Recovery Manager and Stretched Storage: Tech Preview of a New Approach to Active-Active Data Centers
Shobhan Lakkapragada, VMware
Aleksey Pershin, VMware

Tuesday – INF1522 – vSphere With Operations Management: Monitoring the Health, Performance and Efficiency of vSphere with vCenter Operations Manager
Kyle Gleed, VMware
Ryan Johnson, VMware

Tuesday – SDDC3327 – The Software-defined Datacenter, VMs, and Containers: A “Better Together” Story
Kit Colbert, VMware

Tuesday – SDDC1600 – Art of IT Infrastructure Design: The Way of the VCDX – Panel
Mark Gabryjelski, Worldcom Exchange, Inc.
Mostafa Khalil, VMware
chris mccain, VMware
Michael Webster, Nutanix, Inc.

Tuesday – VAPP1318.1 – Virtualizing Databases Doing IT Right – The Sequel
Michael Corey, Ntirety – A Division of Hosting
Jeff Szastak, VMware

Tuesday – SEC1959-S – The “Goldilocks Zone” for Security
Martin Casado, VMware
Tom Corn, VMware

Monday – HBC1533.1 – How to Build a Hybrid Cloud – Steps to Extend Your Datacenter
Chris Colotti, VMware
David Hill, VMware

Monday – INF1503 – Virtualization 101
Michael Adams, VMware

Post from: boche.net - VMware Virtualization Evangelist

Copyright (c) 2010 Jason Boche. The contents of this post may not be reproduced or republished on another web page or web site without prior written permission.


Kevin HoustonIDC Worldwide Server Tracker – Q2 2014 Released

The Q2 2014 IDC Worldwide Server Tracker was released on August 26, 2014, and it reported that demand for x86 servers improved in 2Q14, with revenues increasing 7.8% year over year in the quarter to $9.8 billion worldwide as unit shipments increased 1.5% to 2.2 million servers. HP led the market with 29.6% revenue share based on 7.4% revenue growth over 2Q13. Dell retained second place, securing 21.2% revenue share.


“Modular servers – blades and density-optimized – represent distinct segments of growth for vendors in an otherwise mature market,” said Jed Scaramella, Research Director, Enterprise Servers and Datacenter at IDC. “As the building block for integrated systems, blade servers will continue to drive enterprise customers along the evolutionary path toward private clouds. On the opposite side of the spectrum, density-optimized servers are being rapidly adopted by hyperscale datacenters that favor the scalability and efficiency of the form factor.”

If you want to read the entire press release, please visit http://www.idc.com/getdoc.jsp?containerId=prUS25060614

 

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

William LearaQuick-Start Guide to UDK2014

Getting the UEFI Development Kit (UDK) installed and building is the first step in attempting to work in BIOS development.  Here is my experience getting the latest version of the UDK, UDK 2014, to work in Windows.

Step 1:  Download UDK 2014 (101MB)

Step 2:  The main .ZIP is a collection of .ZIPs.  First, extract UDK2014.MyWorkSpace.zip.

Step 3:  This is tricky:  you next have to unzip BaseTools(Windows).zip, and it has to be put in a subdirectory of the MyWorkSpace directory from Step 2.  The “BaseTools” directory should be at a peer level to Build, Conf, CryptoPkg, etc.  Note that this will entail overwriting several files, e.g., EDKSETUP.BAT—this is okay.  The final directory structure should look like:

    MyWorkSpace

        -->BaseTools

        -->Build

        -->Conf

        etc.

Step 4:  Open a Command Prompt and cd to MyWorkSpace\.  Type the command

edksetup --NT32

to initialize the build environment.

Step 5:  Build the virtual BIOS environment:

> build -t VS2008x86     (for Visual Studio 2008)

> build -t VS2010x86     (for Visual Studio 2010)

Step 6:  Launch SECMAIN.EXE from the directory:

Build\NT32IA32\DEBUG_VS2010x86\IA32

A virtual machine will start and you will boot to an EFI shell.  Type “help” for a list of commands—see Harnessing the UEFI Shell (below) for more information re: the UEFI shell.  Congratulations, at this point you are ready to develop PEI modules and DXE drivers!

That is the absolute minimum work necessary to boot to the NT32 virtual machine.  There is additional information in the file UDK2014-ReleaseNotes-MyWorkSpace.txt, which is included in MyWorkSpace\.
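
To recap, here is the whole flow as one hypothetical batch file.  This is a sketch only: it assumes you start one directory above MyWorkSpace\ and use the Visual Studio 2010 toolchain, so adjust for your setup:

    REM quickstart.bat -- consolidates Steps 4-6 above (hypothetical helper)
    cd MyWorkSpace
    REM Initialize the build environment (Step 4)
    call edksetup --NT32
    REM Build the NT32 virtual BIOS with the VS2010 toolchain (Step 5)
    build -t VS2010x86
    REM Launch the virtual machine (Step 6)
    cd Build\NT32IA32\DEBUG_VS2010x86\IA32
    SECMAIN.EXE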


Jason BocheVMware vCenter Site Recovery Manager 5.8 First Look

VMware vCenter Site Recovery Manager made its debut this week at VMworld 2014 in San Francisco.  Over the past few weeks I’ve had my hands on a release candidate version and I’ve put together a short series of videos highlighting what’s new and also providing a first look at SRM management through the new web client plug-in.  I hope you enjoy.

I’ll be at VMworld through the end of the week.  Stop and say Hi – I’d love to meet you.

 

VMware vCenter Site Recovery Manager 5.8 Part 1

VMware vCenter Site Recovery Manager 5.8 Part 2

VMware vCenter Site Recovery Manager 5.8 Part 3

Post from: boche.net - VMware Virtualization Evangelist

Copyright (c) 2010 Jason Boche. The contents of this post may not be reproduced or republished on another web page or web site without prior written permission.


William LearaUncrustify Your BIOS

One of my favorite newsletters is Jack Ganssle’s The Embedded Muse.  In a recent issue, Jack discussed helpful tools for embedded systems development, and the tool Uncrustify came up.  I decided to run the tool on the UDK 2014 source, and this post discusses the results.

Uncrustify is an open-source code beautifier, comparable to other popular alternatives such as GNU Indent or Artistic Style.  Code beautifiers (a.k.a. pretty-printers) make code easier to read. They automatically update source code to use one consistent style throughout.  The user creates a configuration file that contains specifications for the types of code changes to make:  tab/space settings, newline options, brace styles, etc.  After feeding the configuration file and target source code into the beautifier tool, the tool modifies the source code according to the user’s specified configuration.  After I dug further into Uncrustify, however, I discovered the real star of the show—Universal Indent GUI!

By themselves, the various code beautifiers like Uncrustify are cumbersome to use.  Much time is spent examining all the various configuration options (which number in the hundreds) and manually editing terse configuration files—a tedious affair.  Thankfully, graphical front-ends exist for these tools, and Universal Indent GUI is best-in-class.

Here are four great features of Universal Indent GUI:

 

1.  Universal Indent GUI contains all the various code beautifier applications.

No need to download and install Uncrustify, GNU Indent, or any of the others.  Just select your desired code beautifier application and Universal Indent GUI will update its interface to display those options pertinent to the selected beautifier.  There are twenty-four different code beautifier applications supported by Universal Indent GUI!

 

2.  An elegant help system

The popular code beautifier applications offer literally hundreds of options.  Having to read through PDFs or on-line HTML pages in order to absorb all the many configuration settings is extremely tedious.  The genius of Universal Indent GUI is that a user can hover over an option and trigger a yellow popup containing an explanation of each particular configuration option.  The user can change those options important to him and ignore the rest.  Simple and intuitive!

 

3.  Live Indent Preview

Even with the nice help system, nothing beats actually viewing the source code with the various options applied so you can make sure you are getting exactly what you think you’re getting.  Universal Indent GUI allows you to open a source code file, turn on the Live Indent Preview feature, and see your source code respond to configuration changes in real time.

 

4.  Universal Indent GUI outputs configuration files and batch files

Once you’ve selected the options important to you and configured them, a couple of clicks will allow you to either a) save a configuration file ready for your code beautifier application; and/or b) create a batch file/shell script that will automatically apply your new configuration file to a source code directory tree.  These files can then be shared among all the various members of your development team to ensure consistent style.  Moreover, a source code repository pre-commit hook could be established to enforce a standard programming style.
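
For instance, a minimal pre-commit hook might look like the sketch below.  It assumes a git repository and a shared config named uncrustify.cfg (a hypothetical name); it rejects any staged C file whose formatting differs from Uncrustify’s output:

    #!/bin/sh
    # Hypothetical git pre-commit hook: reject staged .c/.h files that do not
    # match the team's shared Uncrustify configuration (uncrustify.cfg).
    for f in $(git diff --cached --name-only --diff-filter=ACM | grep -E '\.[ch]$'); do
      # -q = quiet, -c = config file, -f = input file (result goes to stdout);
      # cmp -s compares Uncrustify's output against the file on disk.
      if ! uncrustify -q -c uncrustify.cfg -f "$f" | cmp -s - "$f"; then
        echo "Style check failed for $f; run Uncrustify before committing."
        exit 1
      fi
    done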

 

Universal Indent GUI:  Summary

Universal Indent GUI has several other convenient configuration options which are simple and do not get in your way.  The application is available for both Windows and Linux.  There is no special installation required—simply unzip and execute.  I was very impressed with this tool, and highly recommend it to anyone who considers programming style an important characteristic of well-crafted software.  Tip:  use the Uncrustify config.txt file in order to browse what Uncrustify options are available within Universal Indent GUI.

 

UEFI BIOS Coding Standards

Intel has created a coding standards guide for EDK II.  Below are the parts of the coding standards that could possibly be enforced by a code beautifier application, along with the Uncrustify options I selected in Universal Indent GUI in order to make the UDK 2014 source code Intel-coding-standards-compliant:  (correct, the Intel UDK is not compliant with the Intel coding standards…)

  • Limit line length to 80 characters
  • 2 spaces of indentation
  • Never use tab characters; set the editor to insert spaces rather than a tab character
  • if, for, while, etc. always use { }, even when there is only one statement
  • The opening brace ({) should always appear at the end of the previous line
  • The opening brace ({) for a function should always appear separately on a new line
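
To give a flavor of the end result, here is a minimal Uncrustify fragment covering the rules above.  This is a sketch, not my actual file; the option names come from Uncrustify’s config.txt, so verify them against your version:

    # EDK II coding standard, sketched as Uncrustify options
    code_width            = 80      # limit line length to 80 characters
    indent_columns        = 2       # 2 spaces of indentation
    indent_with_tabs      = 0       # never use tab characters
    mod_full_brace_if     = add     # always use { }, even for one statement...
    mod_full_brace_for    = add
    mod_full_brace_while  = add     # ...for if, for, while, etc.
    nl_if_brace           = remove  # keep '{' at the end of the previous line
    nl_for_brace          = remove
    nl_while_brace        = remove
    nl_fdef_brace         = add     # a function's '{' goes on its own line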

Using Universal Indent GUI, I created the following batch and configuration files for Uncrustify to operate on the UDK source:

https://github.com/WilliamLeara/Uncrustify

Running it on the UDK 2014 code base (file UDK2014.MyWorkSpace.zip) took about one minute on my 3GHz 8-core Windows 8 system.

The job made many changes, mostly around enforcing the 80 column limit which the UDK source does not adhere to.  I also noticed that trailing spaces were removed from lines.  I think it would be a lot of fun to play with all the various Uncrustify options and use the tool to automate work.

Do you use a code beautifier application in your organization?  Are they helpful, or a hindrance?  What are your experiences with these tools, positive or negative?  Which of the many code beautifier applications have you tried?  Leave a comment!

Ravikanth ChagantiSession Slides: Community Day 2014 – Introduction to Microsoft Azure Compute

Microsoft Azure offers several services, each categorized into one of four major categories: Compute, Data, App, and Network Services. This session takes you through an overview of the Microsoft Azure Compute Services. Slides: Introduction to Microsoft Azure Compute from Ravikanth Chaganti

Rob HirschfeldCloud Culture: Reality has become a video game [Collaborative Series 3/8]

This post is #3 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

DO VIDEO GAMES REALLY MATTER THAT MUCH TO DIGITAL NATIVES?

Yes. Video games are the formative computer user experience (a.k.a. UX) for nearly everyone born since 1977. Demographers call these people Gen X, Gen Y, or Millennials, but we use the more general term “Digital Natives” because they were born into a world surrounded by interactive digital technology, starting from their toys and learning devices.

Malcolm Gladwell explains, in his book Outliers, that it takes 10,000 hours of practice to develop a core skill. In this case, video games have trained all generations since 1977 in a whole new way of thinking. It’s not worth debating if this is a common and ubiquitous experience; instead, we’re going to discuss the impact of this cultural tsunami.

Before we dive into impacts, it is critical for you to suspend your attitude about video games as a frivolous diversion. Brad explores this topic in Liquid Leadership, and Jane McGonigal, in Reality is Broken, spends significant time exploring the incredibly valuable real-world skills that Digital Natives hone playing games. When they are “gaming,” they are doing things that adults would classify as serious work:

  • Designing buildings and creating machines that work within their environment
  • Hosting communities and enforcing discipline within the group
  • Recruiting talent to collaborate on shared projects
  • Writing programs that improve their productivity
  • Solving challenging mental and physical problems under demanding time pressures
  • Learning to persevere through multiple trials and iterative learning
  • Memorizing complex sequences, facts, resource constraints, and situational rules.

Why focus on video gamers?

Because this series is about doing business with Digital Natives and video games are a core developmental experience.

The impact of Cloud Culture on technology has profound implications and is fertile ground for future collaboration between Rob and Brad.  However, we both felt that the challenge of selling to gamers crystallized the culture clash in a very practical and financially meaningful sense.  Culture can be a “soft” topic, but we’re putting a hard edge on it by bringing it home to business impacts.

Digital Natives play on a global scale and interact with each other in ways that Digital Immigrants cannot imagine. Brad tells it best with this story about his nephew:

Years ago, in a hurry to leave the house, we called out to our video game playing nephew to join us for dinner.

“Sebastian, we’re ready.” I was trying to be as gentle as possible without sounding Draconian. Those were the parenting methods of my father’s generation. Structure. Discipline. Hierarchy. Fear. Instead, I wanted to be the Cool Uncle.

“I can’t,” he exclaimed as wooden drum sticks pounded out their high-pitched rhythm on the all too familiar color-coded plastic sensors of a Rock Band drum kit.

“What do you mean you can’t? Just stop the song, save your data, and let’s go.”

“You don’t understand. I’m in the middle of a song.” Tom Sawyer by RUSH to be exact. He was tackling Neil Peart. Not an easy task. I was impressed.

“What do you mean I don’t understand? Shut it off.” By now my impatience was noticeable. Wow, I lasted 10 seconds longer than my father would have in this same scenario. Progress, I guess.

And then my 17-year-old nephew hit me with some cold hard facts without even knowing it… “You don’t understand… the guitar player is some guy in France, and the bass player is this girl in Japan.”

In my mind the aneurysm that was forming just blew… “What did he just say?”

And there it was, sitting in my living room—a citizen of the digital age. He was connected to the world as if this was normal. Trained in virtualization, connected and involved in a world I was not even aware of!

My wife and I just looked at each other. This was the beginning of the work I do today. To get businesses to realize the world of the Digital Worker is a completely different world. This is a generation prepared to work in The Cloud Culture of the future.

A Quote from Liquid Leadership, Page 94, How Technology Influences Behavior…

In an article in the Atlantic magazine, writer Nicholas Carr (author of The Shallows: What the Internet Is Doing to Our Brains) cites sociologist Daniel Bell as claiming the following: “Whenever we begin to use ‘intellectual technologies’ such as computers (or video games)—tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies.”

In other words, the technology we use changes our behavior!

There’s another important consideration about gamers and Digital Natives. As we stated in post 1, our focus for this series is not the average gamer; we are seeking the next generation of IT decision makers. Those people will be the true digital enthusiasts who have devoted even more energy to mastering the culture of gaming and understand intuitively how to win in the cloud.

“All your base are belong to us.”

Translation: If you’re not a gamer, can you work with Digital Natives?

Our goal for this series is to provide you with actionable insights that do not require rewriting how you work. We do not expect you to get a World of Warcraft subscription and try to catch up. If you already are a gamer, then we’ll help you cope with your Digital Immigrant coworkers.

In the next posts, we will explain four key culture differences between Digital Immigrants and Digital Natives. For each, we explore the basis for the belief and discuss how to facilitate Digital Natives’ decision-making processes.


Rob HirschfeldCloud Culture Series TL;DR? Generation Cloud Cheat sheet [Collaborative Series 2/8]

SUBTITLE: Your series is TOO LONG, I DID NOT READ IT!

This post is #2 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Your attention is valuable to us! In this section, you will find the contents of this entire blog series distilled down into a flow chart and one-page table.  Our plan is to release one post each Wednesday at 1 pm ET.

Graphical table of contents

The following flow chart is provided for readers who are looking to maximize the efficiency of their reading experience.

If you are unfamiliar with flow charts, simply enter at the top left oval. Diamonds are questions for you to choose between answers on the departing arrows. The curved bottom boxes are posts in the series.

Culture conflict table (the Red versus Blue game map)

Our fundamental challenge is that the cultures of Digital Immigrants and Natives are diametrically opposed.  The Culture Conflict Table, below, maps out the key concepts that we explore in depth during this blog series.

Digital Immigrants (N00Bs) versus Digital Natives (L33Ts)

Foundation: Each culture has different expectations in partners

Digital Immigrants: Obey Rules
  • They want us to prove we are worthy to achieve “trusted advisor” status.
  • They are seeking partners who fit within their existing business practices.

Digital Natives: Test Boundaries
  • They want us to prove that we are innovative and flexible.
  • They are seeking partners who bring new ideas that improve their business.

1. Organizational Hierarchy (see No Spacesuits, Post 4)

Digital Immigrants: Permission Driven
  • Organizational hierarchy is efficient
  • Feel important talking high in the org
  • Higher ranks can make commitments
  • Bosses make decisions (slowly)

Digital Natives: Peer-to-Peer Driven
  • Organizational hierarchy is limiting
  • Feel productive talking lower in the org
  • Lower ranks are more collaborative
  • Teams make decisions (quickly)

2. Communication Patterns (see MMOG as Job Training, Post 5)

Digital Immigrants: Formalized & Structured
  • Waits for permission
  • Bounded & linear
  • Requirements focused
  • Questions are interruptions

Digital Natives: Casual & Interrupting
  • Does NOT KNOW they need permission
  • Open ended
  • Discovery & listening
  • Questions show engagement

3. Risks and Rewards (see Level Up, Post 6)

Digital Immigrants: Obeys Rules
  • Avoid risk—mistakes get you fired!
  • Wait and see
  • Fear of “looking foolish”

Digital Natives: Breaks Rules
  • Embrace risk—mistakes speed learning
  • Iterate to succeed
  • Risks get you “in the game”

4. Building your Expertise (see Becoming L33T, Post 7)

Digital Immigrants: Knowledge is Concentrated
  • Expertise is hard to get (diploma)
  • Keeps secrets (keys to success)
  • Quantitative—you can measure it

Digital Natives: Knowledge is Distributed and Shared
  • Expertise is easy to get (Google)
  • Likes sharing to earn respect
  • Qualitative—trusts intuition

Hopefully, this condensed version got you thinking.  In the next post, we start to break this information down.

William LearaA Book Every BIOS Engineer Will Love

Vincent Zimmer published a blog post asking if there was a particular book that inspired your choice of profession.  For me, one of my favorite and most inspiring books is The Soul of a New Machine, by Tracy Kidder.  Here, I’m not alone—this book won the Pulitzer Prize in the early 1980s and is widely admired by many people, especially those who work at computer hardware companies.
The book tells the story of Data General Corporation designing their first 32-bit minicomputer.  You may be thinking “that sounds like the dullest thing I can possibly think of”, but it’s a wonderful and entertaining story.  One of my favorite parts is in the Prologue.  (see, it gets good quickly!)

The Prologue begins with a story of five guys who go sailing in order to enjoy a short, stress-free vacation.  Four are friends, but they need a fifth, so they bring along an interested friend-of-a-friend:  Mr. Tom West.

Tom West is the book’s protagonist and the project leader of the aforementioned new Data General 32-bit minicomputer effort.  He became a hero to computer engineers after the publication of Soul of a New Machine.

But back to the sailboat—one evening, an unexpected storm assails the small boat.  The storm is unexpected in timing, and also unexpected in strength—these amateur sailors fear for their lives.  Tom West keeps his cool, takes charge, goes into action, and, to cut to the chase, the crew survives just fine.
Months after that sailing expedition, the captain, a member of the crew (who was a psychologist by profession), and the rest of the crew (sans West) are sitting around reminiscing:
The people who shared the journey remembered West.  The following winter, describing the nasty northeaster over dinner, the captain remarked, “That fellow West is a good man in a storm.”  The psychologist did not see West again, but remained curious about him.  “He didn’t sleep for four nights!  Four whole nights.”  And if that trip had been his idea of a vacation, where, the psychologist wanted to know, did he work?
And so the reader is launched into the riveting story of Data General creating the Eclipse MV/8000.  It’s a story of corporate intrigue, late nights, tough debugging sessions, colorful personalities, and, against all odds, ultimately a successful and satisfying product launch.

Chapter Nine is dedicated to Tom; his upbringing, his home, and his daily routine.  A funny Tom West anecdote:
Another story made the rounds:  that in turning down a suggestion that the group buy a new logic analyzer, West once said, “An analyzer costs ten thousand dollars.  Overtime for engineers is free.”
But the entire book isn’t just about Tom West.  It’s a beautifully crafted adventure story about how this group of eccentric hardware and firmware guys worked around the clock for over a year to produce a great machine.  An example chapter title:  The Case of the Missing NAND Gate. (!)

Wired magazine wrote a great article about the book.  Here’s a snippet:
More than a simple catalog of events or stale corporate history, Soul lays bare the life of the modern engineer - the egghead toiling and tinkering in the basement, forsaking a social life for a technical one. It's a glimpse into the mysterious motivations, the quiet revelations, and the spectacular devotions of engineers—and, in particular, of West. Here is the project's enigmatic, icy leader, the man whom one engineer calls the "prince of darkness," but who quietly and deliberately protects his team and his machine. Here is the raw conflict of a corporate environment, factions clawing for resources as West shields his crew from the political wars of attrition fought over every circuit board and mode bit. Here are the power plays, the passion, and the burnout - the inside tale of how it all unfolded.
Mr. West died in 2011 at the age of 71.

I cannot do justice to this book—PLEASE do yourself a favor and pick it up.  You will not regret it.

What about you?  Is there a book that inspired you, or continues to inspire you in your vocation?  Leave a comment!

William LearaWelcome!

I’m starting a new blog in order to discuss BIOS programming—the art and science of bootstrap firmware development for computers.  In addition, I expect to discuss general software development topics and my affinity for all things computer related.  My intent is to participate in the BIOS community, share what I’m learning, and learn from all of you.  I hope you will subscribe to the blog (via RSS or email) and use the commenting facility to discuss the content!

William LearaWill I Be Jailed For Saying “UEFI BIOS”?

To hear some people talk, it is a crime to say “UEFI BIOS”.  No, they insist, there was “BIOS”, which has been supplanted by “UEFI”, or “UEFI firmware”.
You do not have a ‘UEFI BIOS’. No-one has a ‘UEFI BIOS’. Please don’t ever say ‘UEFI BIOS’.
Microsoft, in particular, tries hard to drive home this distinction—that computers today have gotten rid of BIOS and now use UEFI.  The Wikipedia article on UEFI implies something similar.

Is this distinction helpful?  Is it accurate?  The fact of the matter is that from the earliest days of the microcomputer revolution, the mid-to-late 1970s, computers have required a bootstrap firmware program. Following the lead of Gary Kildall’s CP/M, this program was called the BIOS.  IBM introduced their PC in 1981 and continued to use the term BIOS.  Just because the industry has embraced a new standard, UEFI, does not mean that somehow the term “BIOS” refers to something else.  I know from my work experience as a BIOS developer that my colleagues and I use the term “UEFI BIOS”—we used to have Legacy BIOS, now we have UEFI BIOS.  It’s still the system’s bootstrap firmware.

Here’s an article from Darien Graham-Smith of PC Pro introducing UEFI and using the term “UEFI BIOS”:  http://www.pcpro.co.uk/features/381565/uefi-bios-explained

Let’s look to the real experts to see what they say—namely, Intel, the originators of the UEFI standard.  Intel dedicated an entire issue of the Intel Technology Journal (Volume 15, Issue 1) to UEFI.  In that journal, the term “UEFI BIOS” was used a total of six times.  Example:
The UEFI BIOS is gaining new capabilities because UEFI lowers the barrier to implementing new ideas that work on every PC.
This edition of the Intel Technology Journal was written by a veritable who’s who of the BIOS industry:  Intel, IBM, HP, AMI, Phoenix Technologies, Lenovo, and Insyde, including some of the Founding Fathers of UEFI:  Vincent Zimmer and Michael Rothman.  If they did not see this term as incorrect, then neither should we.

While the UEFI Spec itself does not appear to use the term “UEFI BIOS”, it does use the term “Legacy BIOS” to refer to the older standard, which to me implies that UEFI is the new, non-legacy BIOS.

Anyway, this question is not likely to become one of the great debates of our time, but I propose that the term “UEFI BIOS” is perfectly acceptable.  Now, on to UEFI BIOS programming!

William LearaThe Case of the Mysterious __chkstk

I was making a small change to a function:  adding to it a couple UINTN auto variables, a new auto EFI_GUID variable, and a handful of changed lines.

Suddenly, the project would no longer compile.  I got this error message from the Microsoft linker:

TSEHooks.obj : error LNK2019: unresolved external symbol __chkstk referenced in function PostProcessKey

Build\TSE.dll : fatal error LNK1120: 1 unresolved externals

NMAKE : fatal error U1077: 'C:\WinDDK\7600.16385.1\bin\x86\amd64\LINK.EXE' : return code '0x460'
Stop.

======================
Build Error!!
======================

This surprised me—why is the linker complaining?  “unresolved external symbol”—I didn’t add a new function call, and neither did I add an extern reference.  Were my linker paths messed up somehow?  After burning lots of time on various wild goose chases, I started searching for this “__chkstk”—what is that?

I started searching Google for help, and found a forum posting with the following comment:

The "chkstk" unresolved external is caused by the compiler checking to see if you've occupied more than (I think 4K on an x86 system) stack space for local variables…
Could I have pushed the function over the maximum stack space?  As I mentioned, I only added two UINTNs (8B each) and an EFI_GUID (16B), for 32B total.

Looking further I noticed that one of the already existing auto variables in this function was a SETUP_DATA structure variable—the variable type that holds all the BIOS Setup program settings information.  This was the problem—there are over 1200 variables contained in this one structure!

After further investigation, I found the following from Microsoft:

__chkstk Routine

Called by the compiler when you have more than one page of local variables in your function.

__chkstk Routine is a helper routine for the C compiler.  For x86 compilers, __chkstk Routine is called when the local variables exceed 4K bytes; for x64 compilers it is 8K.

My solution was going to be to move the SETUP_DATA variable to file scope with internal linkage, but to my surprise I found someone had already done that!  So, there was a file-scope SETUP_DATA variable, and then someone created another automatic SETUP_DATA variable within the scope of one of the functions.  Messy!  Anyway, it made my job easier—I simply removed the auto copy of SETUP_DATA and the linker error went away.
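
For the curious, here is a minimal sketch of the pattern in EDK-style C.  SETUP_DATA below is a stand-in with an illustrative size, not the real definition, and the function names are hypothetical:

    #include <Uefi.h>
    #include <Library/BaseMemoryLib.h>

    //
    // Illustrative stand-in -- the real SETUP_DATA holds 1200+ settings.
    // 12 KB comfortably exceeds the 8K x64 threshold quoted above.
    //
    typedef struct {
      UINT8  Settings[0x3000];
    } SETUP_DATA;

    //
    // One file-scope copy with internal linkage: it lives in the data
    // segment, costs no stack space, and triggers no __chkstk call.
    //
    static SETUP_DATA  mSetupData;

    VOID
    PostProcessKeyBad (
      VOID
      )
    {
      SETUP_DATA  SetupData;  // 12 KB automatic variable: the compiler emits
                              // a __chkstk stack probe, which has nothing to
                              // resolve against in a CRT-less UEFI build.
      CopyMem (&SetupData, &mSetupData, sizeof (SetupData));
    }

    VOID
    PostProcessKeyGood (
      VOID
      )
    {
      // Operate on the single file-scope copy instead.
      mSetupData.Settings[0] = 1;
    }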

Two Takeaways

1) Microsoft, couldn’t there be a better message for communicating that the function has violated its stack space?  Something like:

Stack overflow in function PostProcessKey:  Requested X bytes, maximum limit is 8192 bytes

rather than:

LNK2019: unresolved external symbol __chkstk referenced in function PostProcessKey

2) Developers, be on the lookout for usages of the BIOS Setup data structure.  I’m guessing it’s probably the largest of all the UEFI variables, and by a good margin.

Mark CathcartPower corrupts

Power corrupts; absolute power corrupts absolutely

The historian and moralist John Dalberg-Acton famously expressed this opinion in a letter to Bishop Mandell Creighton in 1887. I was reminded of it on Friday when it was announced that Governor Rick Perry of Texas had been indicted.

Although I’m clearly more of a social activist than a Republican or conservative, this post isn’t really about politics. It may or may not be that Perry has a case to answer. What is clear is that the lack of a term limit for the Governor of Texas has, as always, allowed the Governor to focus more on his succession, more on his politics, than on the people who elected him and their needs.

I’m personally reminded of Margaret Thatcher, who enacted sweeping changes in her time but in her third term spent more time inward looking, in-fighting, than outward looking. More focused on those who would succeed her than on what the country needed to succeed. Major, Howe, Heseltine, Lawson, et al.

Thatcher these days is remembered mostly for consolidating her own power and the debacle that ended her reign rather than for her true legacy: creating the housing crisis and the banking crisis. Thatcher’s government started moving people to incapacity benefit rather than unemployment to hide the true state of the economy from the people. Blair and Brown were mostly the same; after a couple of years of shifting emphasis and politics, it became the same farcical self-protection.

And so it has become the same with Perry and his legacy. Irrespective of the merit of this indictment, what’s clear is that Perry’s normal has changed to defending his legacy and Abbott. Abbott, meanwhile, moves to make as much as possible about Perry’s activities secret. This includes the detail of Governor Perry’s expense claims: sensitive, secret, but not limited to that. Abbott also feels the location of chemical storage is a threat to our liberty and not to be easily publicly accessible. Redaction, it would appear, is a lost art.

For the layman it is impossible to understand the how/who/what of the CPRIT affair. Was Abbott’s oversight of CPRIT politically motivated? Did Abbott really turn a blind eye to the goings-on at CPRIT, and did Perry and his staff know about and approve of this?

If they did, then their pursuit of Lehmberg is bogus and their attempts to stop the Public Integrity Unit (PIU) self-serving. And there is the rub: it really doesn’t matter if it was legal or not. Perry needs to go, term limits should mandate no more than two sessions, and Abbott should be seriously questioned about his motivation. Otherwise, as Thatcher goes, Major goes; as Blair goes, so Brown goes; as Perry goes, so Abbott goes, and the result of too much power being shared out as grace and favor does no one, least of all the local taxpayers, any good at all.

And for the record, Lehmberg’s arrest for drink driving was shameful, and yes, she should have resigned. But the fact that she didn’t doesn’t make it OK for the Governor to abuse his power to try to remove her. Don’t let the Lehmberg arrest distract from the real issues, though: abuse of power and term limits.


Gina MinksThe thing about it: it just sucks.

I know I’m really lucky. I have a job I like to do, great boss, great people to work with. It’s steady pay with good benefits. I have two awesome kids, I live in a great place. We all know #FredTheDog is the best dog in the entire world. My childhood had issues – goodness knows nothing like many of my friends. Again, lucky. My parents didn’t do drugs or drink, mostly because they were

read more here

Hollis Tibbetts (Ulitzer)ARM Server to Transform Cloud and Big Data to "Internet of Things"

A completely new computing platform is on the horizon. They’re called Microservers by some, ARM Servers by others, and sometimes even ARM-based Servers. No matter what you call them, Microservers will have a huge impact on the data center and on server computing in general. Although few people are familiar with Microservers today, their impact will be felt very soon. This is a new category of computing platform that is available today and is predicted to have triple-digit growth rates for some years to come - growing to over 20% of the server market by 2016 according to Oppenheimer ("Cloudy With A Chance of ARM" Oppenheimer Equity Research Industry Report).

read more

Rob HirschfeldYour baby is ugly! Picking which code is required for Commercial Core.

There’s no point in sugar-coating this: selecting API and code sections for core requires making hard choices and saying no.  DefCore makes this fair by 1) defining principles for selection, 2) going slooooowly to limit surprises and 3) being transparent in operation.  When you’re telling someone that their baby is not handsome enough, you’d better be able to explain why.

The truth is that from DefCore’s perspective, all babies are ugly.  If we are seeking stability and interoperability, then we’re looking for adults not babies or adolescents.

Explaining why is exactly what DefCore does by defining criteria and principles for our decisions.  When we do it right, it also drives a positive feedback loop in the community because the purpose of designated sections is to give clear guidance to commercial contributors where we expect them to be contributing upstream.  By making this code required for Core, we are incenting OpenStack vendors to collaborate on the features and quality of these sections.

This does not lessen the undesignated sections!  Contributions in those areas are vital to innovation; however, they are, by design, more dynamic, specialized or single vendor than the designated areas.

The seven principles of designated sections (see my post with TC member Michael Still) as defined by the Technical Committee are:

Should be DESIGNATED:

  1. code provides the project external REST API, or
  2. code is shared and provides common functionality for all options, or
  3. code implements logic that is critical for cross-platform operation

Should NOT be DESIGNATED:

  1. code interfaces to vendor-specific functions, or
  2. project design explicitly intended this section to be replaceable, or
  3. code extends the project external REST API in a new or different way, or
  4. code is being deprecated

While the seven principles inform our choices, DefCore needs some clarifications to ensure we can complete the work in a timely, fair and practical way.  Here are our additions:

8.     UNdesignated by Default

  • Unless code is designated, it is assumed to be undesignated.
  • This aligns with the Apache license.
  • We have a preference for smaller core.

9.      Designated by Consensus

  • If the community cannot reach a consensus about designation then it is considered undesignated.
  • Time to reach consensus will be short: days, not months
  • Except for obvious trolling, this prevents endless wrangling.
  • If there’s a difference of opinion then the safe choice is undesignated.

10.      Designated is Guidance

  • Loose descriptions of designated sections are acceptable.
  • The goal is guidance on where we want upstream contributions not a code inspection police state.
  • Guidance will be revised per release as part of the DefCore process.

In my next DefCore post, I’ll review how these 10 principles are applied to the Havana release that is going through community review before Board approval.


Ravikanth ChagantiTransforming the Data Center – Bangalore, India

Microsoft MVP community, Bangalore IT Pro, Bangalore PowerShell User Group, and Microsoft are proud to announce the Transform Data Center (in-person) event in Bangalore, India. This event is hosted at the Microsoft office in Bangalore. Registration (limited seats): https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032592541&culture=en-IN I will be speaking here on Azure Backup and Azure Hyper-V Recovery Manager. Deepak Dhami (PowerShell MVP) will…

Rob HirschfeldCloud Culture: New IT leaders are transforming the way we create and purchase technology. [Collaborative Series 1/8]

Subtitle: Why L33Ts don’t buy from N00Bs

Brad Szollose and I want to engage you in a discussion about how culture shapes technology [cross post link].  We connected over Brad’s best-selling book, Liquid Leadership, and we’ve been geeking about cultural impacts in tech since 2011.


In these 8 posts, we explore what drives the next generation of IT decision makers, starting from the framework of Millennials and Boomers.  Recently, we’ve seen that these “age based generations” are artificially limiting; however, they provide a workable context for this series that we will revisit in the future.

Our target is leaders who were raised with computers as Digital Natives. They approach business decisions from a new perspective that has been honed by thousands of hours of interactive games, collaboration with global communities, and intuitive mastery of all things digital.

The members of this “Generation Cloud” are not just more comfortable with technology; they use it differently and interact with each other in highly connected communities. They function easily with minimal supervision, self-organize into diverse teams, dive into new situations, take risks easily, and adapt strategies fluidly. Using cloud technologies and computer games, they have become very effective winners.

In this series, we examine three key aspects of next-generation leaders and offer five points to get to the top of your game. Our goal is to find, nurture, and collaborate with them because they are rewriting the script for success.

We have seen that there is a technology-driven culture change that is reshaping how business is being practiced.  Let’s dig in!

What is Liquid Leadership?

“a fluid style of leadership that continuously sustains the flow of ideas in an organization in order to create opportunities in an ever-shifting marketplace.”

Forever Learning?

In his groundbreaking 1970s book, Future Shock, Alvin Toffler pointed out that in the not too distant future, technology would inundate the human race with all its demands, overwhelming those not prepared for it. He compared this overwhelming feeling to culture shock.

Welcome to the future!

Part of the journey in discussing this topic is to embrace the digital lexicon. To help with translations we are offering numerous subtitles and sidebars. For example, the subtitle “L33Ts don’t buy from N00Bs” translates to “Digital elites don’t buy from technical newcomers.”

Loosen your tie and relax; we’re going to have some fun together.  We’ve got 7 more posts in this cloud culture series.  

We’ve also included more background about the series and authors…

Story Time: When Rob was followed out of the room

Culture is not about graphs and numbers, it’s about people and stories. So we begin by retelling the event that sparked Rob’s realization that selling next-generation technology like cloud is not about the technology but the culture of the customer.

A few years ago, I (Rob) was asked to join an executive briefing to present our, at the time, nascent OpenStack™ Powered Cloud solution to a longtime customer. As a non-profit with a huge Web presence, the customer was in an elite class and rated high-ranking presenters with highly refined PowerPoint decks; unfortunately, these executive presentations also tend to be very formal and scripted. By the time I entered late in the day, the members of the audience were looking fatigued and grumpy.

Unlike other presenters, I didn’t have prepared slides, scripted demos, or even a fully working product. Even worse, the customer was known as highly technical and impatient. Frankly, the sales team was already making contingency plans and lining up a backup presenter when the customer chewed me up and spit me out. Given all these deficits, my only strategy was to ask questions and rely on my experience.

That strategy was a game changer.

My opening question (about DevOps) completely changed the dynamic. Throughout our entire presentation, I was the first presenter ready to collaborate with them in real time about their technology environment. They were not looking for answers; they wanted a discussion about the dynamics of the market with an expert who was also in the field.

We went back and forth about DevOps, OpenStack, and cloud technologies for the next hour. For some points, I was the expert with specific technical details. For others, they shared their deep expertise and challenges on running a top Web property. It was a conversation in which Dell demonstrated we had the collaboration and innovation that this customer was looking for in a technology partner.

When my slot was over, they left the next speaker standing alone, following me out of the room to continue the discussion. It was not the product that excited them; it was that I had addressed them according to their internal cultural norms, and they immediately noticed the difference.
What is DevOps?

DevOps (from merging Development and Operations) is a paradigm shift for information technology. Our objective is to eliminate the barriers between creating software and delivering it to the data center. The result is that value created by software engineers gets to market more quickly with higher quality.

This level of reaction caught us by surprise at the time, but it makes perfect sense looking back with a cultural lens. It wasn’t that Rob was some sort of superstar—those who know him know that he’s too mild-mannered for that (according to Brad, at least). What caused the excitement was that Rob had hit their cultural engagement hot button!

Our point of view: About the authors

Rob Hirschfeld and Brad Szollose are both proud technology geeks, but they’re geeks from different generations who enjoy each other’s perspective on this brave new world.

Rob is a first-generation Digital Native. He grew up in Baltimore reprogramming anything with a keyboard—from a Casio VL-Tone and beyond. In 2000, he learned about server virtualization and never looked back. In 2008, he realized his teen ambition to convert a gas car to run electric (a.k.a. RAVolt.com). Today, from his Dell offices and local coffee shops, he creates highly disruptive open source cloud technologies for Dell’s customers.

Brad is a Cusp Baby Boomer who grew up watching the original Star Trek series, secretly wishing he would be commanding a Constitution Class Starship in the not-too-distant future. Since that would take a while, Brad became a technology-driven creative director who cofounded one of the very first Internet development agencies during the dot-com boom. As a Web pioneer, Brad was forced to invent a new management model that engaged the first wave of Digital Workers. Today, Brad helps organizations like Dell close the digital divide by understanding it as a cultural divide created by new tech-savvy workers … and customers.

Beyond the fun of understanding each other better, we are collaborating on this white paper for different reasons.

  • Brad is fostering liquid leaders who have the vision to span cultures and to close the gap between cultures.
  • Rob is building communities with the vision to use cloud products that fit the Digital Native culture.

Kevin HoustonWhy Dell’s PowerEdge VRTX is Ideal for Virtualization

I recently had a customer looking for 32 Ethernet ports on a 4-server system to drive a virtualization platform.  At 8 x 1GbE per compute node, this was a typical VMware virtualization platform (they had not moved to 10GbE yet), but it’s not an easy requirement to meet on blade servers.  The Dell PowerEdge VRTX, however, is an ideal platform for it, especially for remote locations.

The Dell PowerEdge VRTX infrastructure holds up to 4 compute nodes and allows for up to 8 x PCIe cards.  The unique design of the Dell PowerEdge VRTX allows a user to run up to 12 x 1GbE NICs per server by using a 4 x 10GbE Network Daughter Card on the Dell PowerEdge M620 blade server and then adding two 4-port 1GbE NICs into the PCIe slots.  The 4 x 1GbE NICs via the LAN on Motherboard plus the 8 x 1GbE ports via the PCIe cards offer a total of 12 x 1GbE NICs per compute node – which should be more than enough for any virtualization environment.  As an added benefit, since the onboard LOM is a 1/10GbE card, users will be able to seamlessly upgrade to 10GbE by simply replacing the 1GbE switch with a 10GbE one when it becomes available later this year.
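
Running the numbers for the scenario above (my arithmetic, using the configuration just described):

    Requirement:   32 x 1GbE ports / 4 compute nodes  =  8 ports per node
    Per node:      4 x 1GbE (onboard) + 2 PCIe cards x 4 x 1GbE  =  12 ports
    Chassis-wide:  4 nodes x 12 ports  =  48 x 1GbE ports, well above the 32 requested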

If you have a remote environment, or even a project that needs dedicated server/storage/networking, I encourage you to take a look at the Dell PowerEdge VRTX.  It’s pretty cool, and odds are, your Dell rep can help you try one out at no charge.

For full details on the Dell PowerEdge VRTX, check out this blog post I wrote in June 2013.

 

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

 

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

Footnotes