dellites.me

Dell TechCenter: The New Standard for Identity, Data and Access Governance

If you read Chapter One of Identity and Access Management for the Real World, you learned of a proposed “maturity model” for IAM. Just to summarize, I likened your IAM journey to Maslow’s hierarchy of needs, with the pinnacle being governance.

Until access, security, control, and management are taken care of, governance is a near impossibility. That’s why the governance chapter of IAM for the Real World follows the access management chapter. If you are struggling to manage access, you will REALLY struggle to achieve governance.

So what is identity governance?

There are all kinds of technical and jargon-laden definitions, but I like to describe it this way: governance is making sure you do things right. So from an identity governance standpoint, it means making sure that the right people have the right access, to the right stuff, in the right way, with all the other right people saying that it is OK that it’s happening that way.

And then there’s what should be governed. There are three major categories where identity and access governance (IAG) comes into play, and no project is complete unless all three are addressed.

  1. End user access to applications

  2. End user access to data

  3. Administrator access to privileged accounts

But it gets hard when you start to consider what all of those rights actually mean in your real world. And it gets really hard when you have to define right over and over again, for the same people but on different systems or for different access scenarios. All of a sudden, right might not be quite so right. And when governance is an afterthought – tacked on after the fact – it becomes just another source of complexity, added cost, and potential failure. On the other hand, if your access management tasks are performed with a governance mindset, and with governance-enabled tools, the journey up the pyramid is simple and painless. Watch this video on how line-of-business personnel can actually be at the front lines of governance through attestation/recertification.

Put in simple terms, if provisioning is done without an eye towards governance or governance is imposed on an existing and flawed provisioning implementation, you’re in for a bumpy ride. And if your access governance solution can’t cover data and/or privileged accounts, that’s just one more layer of technology and one more solution that must be deployed, supported, and paid for.

So governance for the real world considers all the rights that are in play across all access types, all user populations, and all systems. And it is tightly coupled with the foundation for everything – provisioning. Watch this video to learn more about provisioning and governance. And here’s a short video on the Dell One Identity approach to identity governance.

To learn more about this real-world approach to governance, download and read Identity and Access Management for the Real World: Identity Governance.

  

Dell TechCenter: Dell Named 2014 Education Partner of the Year by Google for Education

At Dell, we are committed to empowering educators and students with technology to personalize learning and develop the critical thinking skills they need for college and career. Realizing this goal requires a thoughtful, end-to-end IT strategy and close collaboration with industry partners to offer schools a robust portfolio of productive, affordable and flexible IT solutions specifically designed for education. Our latest education solutions portfolio launch exemplifies this approach, including the second generation of the Dell Chromebook 11 with new features to advance digital learning while maximizing school budgets. And, with our new Dell KACE K1000 Systems Management Appliance, we offer first-to-market management support for Chromebooks, making it even easier to administer, secure and manage all Chromebook devices.

That’s why I am thrilled to share that Google for Work & Education recognized Dell as the 2014 Education Partner of the Year at TeamWork 2015, the annual global partner summit which took place in San Diego, CA. This award demonstrates Dell’s unwavering focus on education customers and strong collaboration with Google to enable schools to deploy, manage and support Dell Chromebooks successfully.

After all, working together to help schools innovate learning is what’s most important about our partnership. We’ve seen some incredible results from school districts across the country that leverage Dell Chromebooks to increase student access to technology, meet online testing requirements and support unique learning needs.

For example, Hesperia Unified School District deployed more than 19,000 Dell Chromebooks to enhance learning for the district’s students in grades 2-12. The district was able to save more than $3 million by selecting the Dell Chromebook over traditional textbooks and plans to use the devices to enhance classroom instruction while also preparing for Smarter Balanced Assessment testing. 

"Working every day on the same device that will be used to take the Smarter Balanced Assessment will definitely benefit our students and over time, we anticipate that teachers will develop computer-enhanced activities that complement the Smarter Balanced questions and even better prepare our students for the test, " according to Director of K-12 Programs and Projects Darrel Nickolaisen.

More recently, 100 students in the New Buffalo Area School District received Dell Chromebooks to transition from computer labs toward mobile learning environments. Chillicothe High School students “lit up” upon receiving Dell Chromebook 11 laptops in January and are excited to collaborate on projects using their devices. And Dare County Schools’ Digital Learning Initiative deployed the Dell Chromebook 11 to take their schools into the 21st century.

These are just a few of the inspiring stories that demonstrate the impact of our partnership in education. At Dell, we remain steadfast in meeting the needs of our education customers with open, affordable and manageable solutions. We are excited to build upon this success so that more students and teachers everywhere have the opportunity to realize technology’s learning potential.

Dell TechCenter: Anatomy of an Insider Threat: A Randy Franklin Smith On-demand Webcast @Dell_WM

Last week, almost 1,400 of your peers watched a demonstration of an inside data breach, and now you can watch it, too! A recent Gartner report states that organizations have put a lot of resources into preventing breaches, but that they should now...(read more)

William Leara: NIST 800-155: BIOS Integrity Measurement Guidelines

In an article last week, I talked about NIST document 800-147, a set of guidelines for BIOS developers to create a secure BIOS flash update process.

Related to 800-147 and building upon its foundation is another NIST spec, 800-155.

From NIST:

SP 800-155

DRAFT BIOS Integrity Measurement Guidelines

NIST announces the public comment release of NIST Special Publication 800-155, BIOS Integrity Measurement Guidelines. This document outlines the security components and security guidelines needed to establish a secure Basic Input/Output System (BIOS) integrity measurement and reporting chain. BIOS is a critical security component in systems due to its unique and privileged position within the personal computer (PC) architecture. A malicious or outdated BIOS could allow or be part of a sophisticated, targeted attack on an organization —either a permanent denial of service (if the BIOS is corrupted) or a persistent malware presence (if the BIOS is implanted with malware). The guidelines in this document are intended to facilitate the development of products that can detect problems with the BIOS so that organizations can take appropriate remedial action to prevent or limit harm. The security controls and procedures specified in this document are oriented to desktops and laptops deployed in an enterprise environment.
 
The public comment period closed on January 20, 2012.

Although still labeled “draft”, I am not aware of any update to 800-155 since its original release and public comment period.  I believe the impetus for the creation of 800-155 was:

In September, 2011, a security company discovered the first malware designed to infect the BIOS, called Mebromi. "We believe this is an emerging threat area," said Regenscheid. These developments underscore the importance of detecting changes to the BIOS code and configurations, and why monitoring BIOS integrity is an important element of security.

What is 800-155 About?

Today, enterprise software vendors sell corporate anti-virus products.  These products inventory all the PCs in an organization, run anti-virus scans, and report the results to the IT administrator via a centralized management application.  The IT administrator can thus manage the network by quarantining infected machines and then cleaning any virus/malware detected.

In a similar fashion, 800-155 lays out guidelines for creating an enterprise software system that communicates with all the PCs on the network, but, rather than scanning for viruses at the OS level, the system compares BIOS measurements to known-good “golden” measurements.  If the comparison fails on a particular PC, the system BIOS has been compromised and the IT administrator can take action.

By “measurement”, we’re talking about taking a cryptographic hash of BIOS code or data before it’s run or read.  Only if the hash matches its expected value will the system continue with the execute or read operation.  This is a big topic, and you’ve perhaps heard of Intel’s Trusted Execution Technology (TXT).  Read the TXT Wikipedia article to get a better idea of what 800-155 means by “measurement”.
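
To make that concrete, here is a minimal Python sketch of the measure-then-verify step. It is illustrative only, not code from 800-155 or from any real firmware; the BIOS region bytes and the golden digest below are hypothetical stand-ins.

import hashlib
import hmac

def measure(blob: bytes) -> str:
    # Measurement = cryptographic hash of the code/data before it is used.
    return hashlib.sha256(blob).hexdigest()

def verify(blob: bytes, golden_digest: str) -> bool:
    # Proceed with the execute/read operation only if the measurement
    # matches its expected ("golden") value.
    return hmac.compare_digest(measure(blob), golden_digest)

bios_region = b"\x55\xaa..."        # stand-in for a BIOS code region
golden = measure(bios_region)       # in practice, published by the OEM
assert verify(bios_region, golden)  # a mismatch means the BIOS changed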

 

Components of the System

I would summarize the specification by classifying its contents into the following concepts:

  • Various Roots of Trust (RoT) are discussed.  A RoT is a known-good and inviolable foundation from which the rest of a secure system can be based.
    • RoT – Measurement is about providing a cryptographic processor to make reliable integrity measurements
    • RoT – Storage is about providing a tamper-proof store of the integrity measurements, in proper sequence.
    • RoT – Reporting is transmitting the results of the integrity measurements from the PC to the management application in a secure manner.
  • BIOS Boot Code integrity
    • The integrity of the BIOS boot block, POST code, ACPI code (among other possibilities) is measured and verified.
  • BIOS Data integrity
    • The integrity of the BIOS configuration is likewise measured and verified.  Example:  BIOS Setup options.
  • Measurement Authority
    • The management application used by the IT administrator to manage the enterprise.
    • Communicates with the PCs on the network using hashes and nonces to guarantee accurate and fresh data (see the sketch after this list).
  • Golden Measurements
    • The management application stores the expected (called “golden”) measurements.  These get compared to the measurements sent in over the network from each PC.
    • The golden measurements typically would come from the PC OEM (Dell, HP, etc.).
  • Remediation
    • Quarantining infected systems by removing them to a restricted, locked-down part of the network.
    • Fixing the infected system via a BIOS update or reconfiguration.
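
Pulling those pieces together, here is a hypothetical Python sketch of that reporting exchange between the measurement authority and a PC. It is a simplification for illustration, not the spec’s protocol: a real implementation would use TPM-signed quotes, and the machine IDs, keys and golden values here are made up.

import hashlib
import hmac
import os

# Golden measurements keyed by machine; in practice supplied by the OEM.
GOLDEN = {"pc-42": hashlib.sha256(b"bios-image-v1").hexdigest()}

def pc_report(pc_id: str, bios_blob: bytes, nonce: bytes, key: bytes) -> dict:
    measurement = hashlib.sha256(bios_blob).hexdigest()
    # Bind the measurement to the nonce so a stale report cannot be replayed.
    mac = hmac.new(key, nonce + measurement.encode(), "sha256").hexdigest()
    return {"pc": pc_id, "measurement": measurement, "mac": mac}

def authority_check(report: dict, nonce: bytes, key: bytes) -> bool:
    expected = hmac.new(key, nonce + report["measurement"].encode(),
                        "sha256").hexdigest()
    fresh = hmac.compare_digest(report["mac"], expected)  # accurate and fresh
    matches_golden = report["measurement"] == GOLDEN.get(report["pc"])
    return fresh and matches_golden

key = os.urandom(32)    # stand-in for per-machine credentials
nonce = os.urandom(16)  # fresh per request, guaranteeing freshness
report = pc_report("pc-42", b"bios-image-v1", nonce, key)
print(authority_check(report, nonce, key))  # False would trigger remediation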

The appendices go on to propose various options for implementing the roots of trust, and also possible ways the measurement system could be implemented in a real-world enterprise.

 

Conclusion

I can only give an overview of 800-155 here, and encourage you to read the paper yourself.  The paper touches on many, many security technologies, as evidenced by the extensive References section in the back.  Examples of prerequisite technologies presumed by 800-155:  Trusted Computing and the work of the Trusted Computing Group; digital signatures, SMBIOS, Trusted Platform Modules (TPM), SMM mode… plus more.  It’s all interesting reading for any BIOS security developer, and I hope I’ve whetted your appetite for checking out the paper.

Download from NIST here:

http://csrc.nist.gov/publications/drafts/800-155/draft-SP800-155_Dec2011.pdf

Derik Pereira: To read or not to read

I think we ought to read only the kind of books that wound or stab us. If the book we’re reading doesn’t wake us up with a blow to the head, what are we reading for? So that it will make us happy, as you write? Good Lord, we would be happy precisely if we had no books, and the kind of books that make us happy are the kind we could write ourselves if we had to. But we need books that affect us like a disaster, that grieve us deeply, like the death of someone we loved more than ourselves, like being banished into forests far from everyone, like a suicide. A book must be the axe for the frozen sea within us. That is my belief.

so said Kafka

Kevin Houston: Blade Server Comparisons – March 2015

It’s always good to have a single source of comparisons for the Tier 1 blade servers, so here is the updated list.  I realize that I should have posted this in December, but I guess it’s better late than never.  As an FYI – expect an update to this on April 7th (hint, hint.)

(Image: x86 Blade Server Comparison, 2.28.15, BladesMadeSimple.com)

Here is the full PDF:

BladesMadeSimple.com_x86_Blade_Server_Comparison(2.28.15)

Please let me know if you spot any discrepancies.  Thanks for your support and assistance!

 
Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 18 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

 

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.

 

 

Dell TechCenter: Migrating #TheDress Images from IBM Notes to SharePoint

We like to show how good our Migrator for Notes to SharePoint migration tool is at migrating content. Fidelity of content migration is important to us and our customers. So I created two new documents in one of our test Notes databases. One document...(read more)

Dell TechCenter: Seeking Analysts’ Advice on Cloud Choices

According to a recent Gartner report[1], hosted private cloud usage will more than double in the next three years. And many companies will use Forrester (and other analyst firms) to provide independent advice to help validate choices and strategies for these types of cloud deployments. In fact, companies selling software, hardware, telecom or services put huge resources into marketing and sales promotion. That’s why analysts not only advise businesses about what technology they should buy, but also recommend suppliers.

It’s a good idea. Businesses that buy technology — especially those making large investments — are very likely to use technology analyst firms. Not only do four out of five Fortune 2000 companies buy analyst services, but two out of every five enterprises with revenue between $100 million and $500 million take advice from independent firms before making their decisions.

To provide our customers with information and analysis from one of the industry’s most recognized analysts, Forrester Research, Dell invites you to join us for the Cloud Insight Series featuring Forrester Research, a quarterly, LiveStream broadcast, free to Dell customers and invited guests.

Each broadcast will feature Ms. Lauren Nelson, Infrastructure-as-a-Service (IaaS) cloud lead and analyst at Forrester Research. She’ll discuss how your organization can increase innovation and agility in the cloud age. Lauren is the primary author of a Forrester Wave™ report focusing on hosted private cloud solutions and how to select the right partner for your cloud initiatives.

The broadcasts are sponsored by industry-leading companies, including Intel, Amazon Web Services, VMware and Microsoft Azure.

Our first industry expert will be Mr. Gerald Seaman, the Senior Cloud Marketing Manager in the Data Center Group at Intel. In that role, Mr. Seaman focuses on bringing complete secure hybrid cloud solutions to enterprise customers. Mr. Seaman is currently engaged in multiple programs to create solutions that maximize business velocity, security and value of the cloud for large- and medium-sized companies.

This useful, non-sales, informational broadcast will include a live Twitter feed (@cloudseries, #ForresterIntel) and will be presented twice on March 19:

  • Live Broadcast 1: 8:30am EST; London: 1:30pm GMT; Paris: 2:30pm CET
  • Live Broadcast 2: 11:30am EST (8:30am PST); London: 4:30pm GMT; Paris: 5:30pm CET

To request an invitation to the Forrester Cloud Insight Series, please send an email to Cloud_Insight@dell.com, and you can also register directly for the March 19th event by clicking here. For more information on Dell Cloud Services, please visit dell.com/dellcloudondemand.

  1. Cloud Services Providers Must Understand Deployment, Adoption and Buyer Complexity to Leverage Cloud Revenue Opportunities, Gartner, 1/13/2015 

Dell TechCenter: Say hello at Atmosphere 2015!


Ed. Note: This post was co-authored by Jeremy Erwin, Dell Networking

At Dell, we’re always keeping our customers ahead of the curve with next-generation products and solutions. That’s why we’re excited to be a part of Atmosphere 2015, where this year’s focus is on the transformation #GenMobile, a new generation of users armed with mobile devices, will have on the future of every aspect of work and personal communication.

Held at the Cosmopolitan of Las Vegas, March 1-6, Atmosphere 2015 brings together over 2,000 leading mobility and Wi-Fi experts, as well as resellers, business executives, media and industry analysts. As a silver sponsor, we’ll be on hand to showcase our wired and wireless products on the event floor (booth 9). We’ll also be hosting a speaking session covering our next-generation campus solutions aimed at helping our customers optimize their networking infrastructure.

Session details: Dell Solutions for a Modern Campus – Broadening the Play
Engage your customers on end-to-end solutions for their modern campuses. Whether wired or wireless, UC&C or BYOD, the demands on end-user access networks are growing fast. IT managers must support rapidly growing numbers of users and devices, all expecting access to richer, more demanding content. To keep up, IT managers need comprehensive solutions that still enable flexibility for growth.

Attendees will learn about Dell’s comprehensive, future-ready campus solutions aimed at modernizing your customers’ networks. This session will spark a broader conversation, allowing attendees to understand the challenges their customers face and to learn more about the Dell solutions that deliver optimized infrastructure.

Dell on the show floor (booth 9): Our BYOD solutions will be an integral part of our discussion at Atmosphere 2015. With a recent upgrade to our India Operations HQ in Bangalore that provided a 300 percent performance boost and reduced the BYOD and guest onboarding time from 3 hours to 2 minutes, the Dell Networking team has a wealth of best practices to share. If you’re attending this year’s conference, please stop by booth 9 to learn more about BYOD and our future-ready networking solutions.

Learn more about ClearPass guest and BYOD by checking out the “Delivering enhanced BYOD infrastructure” white paper or our recent Power Solutions article. Follow us at @DellNetworking on Twitter to stay updated.

Dell TechCenter: How Dell Digital Business Services is Setting the Transformation Agenda

As organizations continue to transform and adopt a business-first approach, using a digital medium is a great way to explain Dell’s Digital Business Services philosophy and show how it works in certain industries. In an earlier post, I discussed how Advanced Analytics was helping the digital transformation agenda. Building on that foundation, this video expands on how Digital Business Services enables new business models for our customers, drives a superior customer experience and provides opportunities for stronger employee engagement.

(Please visit the site to view this video)

Dell Services’ consulting-led approach leverages a “Five R” methodology. We build upon the rich legacy that Dell has with social media, analytics and ecommerce to deliver the right industry context and user experience.

Our pioneering intellectual property—the Digital Industry maps—works across many industries. Dell is making new inroads in digital transformation based on customer momentum, analysts’ recognition and the alliance partnerships we’ve established. As you can see, we’re very excited about digital transformation, and we look forward to working with you.

Dell TechCenter: How Did Noritz America Plan Their Backup and Recovery? By Thinking Like a CIO

I don’t live in New England, yet I am looking out my window at a LOT of snow right now, with more predicted this evening.  Now I love winter and snow, but really only for a short period of time. At some point, I get really cold and when I get cold, I get really crabby.  That’s when I start dreaming of all things warm:  hot chocolate, cozy blankets, my beat-up toasty slippers, and of course hot showers and baths.

Speaking of hot water, if you’ve never experienced tankless water heaters, it’s a bit mind blowing.  Energy saving. Very little space. Fresh water on demand without it sitting in a tank for hours on end getting stale. Noritz America sells, installs, and services more than 20 models of tankless water heaters and does a rousing business all across North America, through a network of more than 1,200 wholesalers, mostly plumbers. These folks know their hot water - literally.

But Noritz’s small IT department didn’t want to end up in (figurative) hot water and put on their thinking caps to address concerns about potential disruptions to channel fulfillment and support capabilities due to lost or inaccessible data.  Because they are headquartered in California between two major fault lines, Noritz was also concerned about data loss and business disruption due to a major earthquake. Taken together, the IT staff decided to ‘Think Like a CIO’ and do some serious disaster recovery planning for their hybrid physical and virtual environment.

To keep the business running smoothly, Noritz has its critical customer relationship management and enterprise resource planning applications with associated data virtualized and hosted in servers that are colocated in a third-party facility several miles away. But with their CIO hats on, they looked more closely at their business goals and needs for data protection, realizing that a physical data center with replicated data was only one part of the puzzle.

“To complete our disaster recovery and business continuity models,” explains Sara Frautschy, IT Manager, Noritz America, “we needed a place where we could back up and replicate our data in case we get hit with a big quake and our colocated data center goes down.”

So they took things to the cloud as a final piece of their business continuity plan. They were able to design a cloud-based data backup, replication and recovery solution that met their business needs.

Thinking like a CIO, Noritz mapped out specific goals for their unique environment, including:

• Setting the Recovery Time Objective (RTO) to get back online as quickly as possible
• Choosing a Recovery Point Objective (RPO), so the least amount of data was at risk
• Planning how they’d handle both day-to-day single-item restoration and complete disaster recovery (especially in an earthquake zone)
• Making their disaster recovery plans testable on a regular basis
• Supporting flexible disaster recovery options, including P2V, V2V, V2P, P2P, and now their cloud-based x2V
• Minimizing storage requirements, as well as staff management time investment

Thinking Like a CIO can keep you out of hot water – or, in Noritz’s case, keep them in the hot water business but out of the hot water of a big IT disaster.

What do you think?  How is your BUDR planning going?  Find out more about how YOU can Think like a CIO in your own IT environment.

 

Dell TechCenter: ACCelerating Learning for STEM Students

Austin Community College (ACC) is rethinking how it is engaging students by personalizing the learning experience to prepare for the challenges of the 21st-century workforce. In the fall, ACC opened its beautiful, new campus at the old Highland Mall in central Austin.  Dell partnered with ACC to create a digital-learning environment that supports students and faculty with the tools and access they need to be successful.

ACC concurrently launched a new campus that reinvented the idea of a college campus, introduced cutting-edge technology, and overhauled a core curriculum using adaptive learning.  The campus opened with a headcount of 3,700 students.  The ACCelerator was the home of 46 sections – 750 students – for the redesigned developmental math course known as MATD 0421. 

The 30,000 square foot ACCelerator, the centerpiece of the new campus, provides access to 600+ virtualized Dell Wyse thin clients for individualized learning and small group sessions.  It also contains small group study rooms, tutoring, classrooms, and academic coaching.  Instructors from all areas take advantage of being able to schedule on-demand space for hands-on, collaborative activities using the computer resources to facilitate learning.   

Throughout the space, there are over 360 Dell OptiPlex desktops, 140 Latitude laptops, an additional 200 Wyse stations and 105 Venue 7 tablets that will run the digital library and can be checked out by students for use as e-readers. Dell is also powering the back-end VMware View VDI environment, built on PowerEdge R720 servers and Force10 switches and running off an EqualLogic PS6210XS SAN.

The new lab will give students access to campus resources and applications from anywhere through desktop virtualization technology and Dell Wyse thin and zero clients and a robust network infrastructure. Dell is proud to help ACC support the advancement of STEM for Austin Community College students with flexible, cutting-edge technology that encourages new forms of personalized and engaged learning.  Check out the new video on the ACCelerator here.

(Please visit the site to view this video)

And, on March 9, hear ACC students, faculty, and administrators at SXSWedu explain how the ACCelerator is changing teaching and learning. The link to the session is here.  

Dell TechCenter: Manufacturing is Positioned to Reap the Rewards of Big Data

The manufacturing industry has a plethora of new revenue streams available to it because of its adoption of big data analytics.(read more)

Dell TechCenter: The University of Iowa Hospitals Use Predictive Analytics to Slash Infection Rates

The University of Iowa Hospitals and Clinics are using predictive analytics to dramatically decrease post-surgical infections.(read more)

Dell TechCenter: Manufacturers and Big Data Analytics: Strange Bedfellows?

During the Super Bowl XLIX telecast earlier this month, I was captivated by a commercial from MacNeil Automotive Products, not because I’m in the market for new car accessories. Rather, I was blown away that the little-known maker of WeatherTech all-weather floor mats and cargo liners would pay an estimated $4 million for a 30-second spot alongside big-name brands, such as Coca-Cola, Snickers, Doritos and Budweiser.

I couldn’t help but think that somehow big data analytics played a role in the company’s decision to spend big bucks to air a commercial during the Super Bowl. If so, this is a great example of how data can help organizations of all types and sizes take advantage of “in the spotlight” opportunities, in this particular case by providing the insight necessary to justify an investment that represented roughly 10 percent of the company’s purported annual advertising budget.

At first, this scenario seemed like a pairing of strange bedfellows till I realized that manufacturers actually have an edge over other industries, as this sector was among the first to make widespread data collection a standard practice. MacNeil has long had the ability to track and monitor much of what’s happening on its production line, so it makes sense that the company would be well positioned to know what happens to its products after they’ve come off the assembly line.

I’m sure big data provided some assurances to MacNeil that investing in a Super Bowl commercial made good fiscal sense. And, from what I’ve read about this so far, it did. As founder David MacNeil explained in a Forbes article that appeared right after this year’s Big Game, the decision to run a commercial was driven in part by the success of last year’s Super Bowl ad.

That’s right, the “little automotive products engine that could” debuted its first Super Bowl commercial in 2014; experiencing an 80 percent increase in website traffic and 57 percent jump in calls to its 800 number in the month following the placement. So running another ad in the Super Bowl was a smart business decision. And, based on the buzz created by this year’s ad, which reinforced the company’s heavily patriotic “Made in America” mantra, I predict WeatherTech products will be back next year for a three-peat!

What’s most interesting to me about this example is how this successful use case of big data analytics didn’t come from a really big manufacturer, just a really smart one.

There are endless opportunities to mine both big and small amounts of data to produce meaningful, actionable business insight. Another prime example that comes to mind involves Polyform U.S. Ltd., the world’s leading manufacturer of marine products, including buoys, fenders and mooring accessories.

A customer of Dell Software’s big data analytics solutions, Polyform accesses a slew of different data sources while taking advantage of Hadoop.

“We can easily access, connect and visualize all kinds of data to attain a complete, global view of raw materials, competitive pricing and everything else that goes into making and marketing our world-class products,” says Art Kuntz, IT manager for Polyform U.S. Ltd.

As a result, Polyform can easily attain a complete global view of every facet of its business, which has translated into increased automation, productivity and operational efficiencies. These benefits are within reach of any organization that understands the value of its data as well as the ability to collect and analyze it. Again, here’s where I think manufacturers, in particular, are well positioned to capitalize on a rich data collection heritage to drive new revenue streams through cross-selling and up-selling.

As a crafting enthusiast, I’m struck by the seemingly endless opportunities for makers of just about everything from paper to metalwork to reach new legions of customers with unique products to fuel a favorite pastime. I’m probably stating the obvious, but there’s a very specific demographic spending lots of dollars in this area and manufacturers can readily figure this out through data collected from crowdsourcing or targeted marketing efforts.

Having access to vast amounts of usage data that’s been sliced, diced and analyzed is the best way to improve customer experiences while revealing new revenue streams. Additionally, it’s the best route to forecasting which products are likely to gain rapid adoption.

In the future, I expect to see manufacturers lead the way in pre-emptively seeding the market with new products based on an analysis of social media metrics, among other data sources. Companies across all sectors can learn from the manufacturing industry, which traditionally has done a good job of staying in tune with their customers’ wants and needs while utilizing collective customer intelligence to inform product development.

In that sense, it’s not strange at all to find manufacturers taking advantage of big data analytics to augment and amplify business direction. In the future, I expect we’ll see many more examples of companies like MacNeil Automotive Products and Polyform along with natural synergies between scores of seemingly strange bedfellows.

What new partnerships do you think will be proven in time? Drop me a line at Joanna.schloss@software.dell.com for continued commentary on the new alliances that will come from a need and respect for big data analytics.

Dell TechCenter: Capitalize on mobile moments to increase workforce productivity, part 2

Your employees are undergoing a mobile mind shift: The more their mobile devices solve problems for them, the more these employees will expect the devices to provide what they need, at that very instant of need. As discussed in the first post of this...(read more)

Dell TechCenter: How to Build Accurate Schema Deployment Scripts - New videos on Toad for Oracle Xpert Edition

What are you using to compare and synchronize database schemas? DBAs need to be confident that deployment scripts accurately reflect the changes that were intended to come from development. The need for accurate scripts becomes even more urgent in companies moving toward Continuous Deployment.

We saw an ideal way to build those functions into Toad for Oracle Xpert Edition with Compare Schema & Sync. In fact, we’ve added a new feature, Compare Multiple Schemas & Sync, especially useful for DBAs who have to maintain the accuracy of multiple schemas between two databases.

  1. Open Compare Schema from the Toad menu.
  2. Select Source Schema by name (e.g., “prod”), then one or more target schemas by name (e.g., “test”).
  3. Optionally select a Toad snapshot file, which represents a schema in a certain state at a given point in time.
  4. Select the types of objects to compare and click Run.
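
For a feel of what a compare does under the hood, here is a rough Python sketch that diffs two schemas’ object inventories straight from the Oracle data dictionary. This is not how Toad implements Compare Schema & Sync (Toad also diffs each object’s DDL and generates the sync script for you); the connection details below are placeholders.

import oracledb  # pip install oracledb

def schema_objects(conn, schema: str) -> set:
    # Return the set of (object_type, object_name) pairs owned by a schema.
    with conn.cursor() as cur:
        cur.execute(
            "SELECT object_type, object_name FROM all_objects "
            "WHERE owner = :owner",
            owner=schema.upper(),
        )
        return set(cur.fetchall())

conn = oracledb.connect(user="compare_user", password="***",
                        dsn="dbhost/orclpdb")  # placeholder credentials
source = schema_objects(conn, "PROD")
target = schema_objects(conn, "TEST")
print("Missing from target:", sorted(source - target))
print("Extra in target:", sorted(target - source))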

Watch our John Pocknell demonstrate how to compare schemas in this Toad Xpert Edition video (3 minutes):

(Please visit the site to view this video) 

The results show the differences between the schemas, such as any objects present in one and absent from the other. Toad can also generate a synchronization script for applying the changes to the target database.

Next steps

This is the final post in my 5-part video series designed to help you decide whether Toad for Oracle Xpert Edition is right for you. If the possibility of inaccurate schema deployments is keeping you up at night, have a look at Reason #5 in our updated technical brief, “Five Ways Toad Xpert Edition Can Help You Write Better Code,” for more details on comparing schemas.

And, if you want to compare and synchronize a few hundred schemas in your own databases, download a 30-day trial and take Toad Xpert Edition for a test drive.

Dell TechCenter: Opening the Door to OpenStack

In most discussions about enterprise private clouds, an OpenStack solution will be part of the mix. Some early success stories help make OpenStack appear to be a desirable and viable option. For example, a glance at the agenda for the April 21st OpenStack conference in Melbourne promises case studies showing how Time Warner Cable and GoDaddy leveraged OpenStack as an enterprise private cloud platform.  

Dell, of course, can cite OpenStack successes such as deployments at the University of Alabama at Birmingham and the National Institute of Informatics in Japan, and we have been using OpenStack internally for some time now.

Meanwhile, analyst firm Gartner issued a report earlier in February asking the question, "Is OpenStack Ready for Mainstream Private Cloud Adoption?" (the answer is that it depends on the needs of the particular enterprise) and the research firm 451 Research projected a few months ago that OpenStack revenue will pass the $1 billion mark this year.

The right platform for your enterprise’s private cloud will likely be different from the right platform for others, but if OpenStack is on your short list, here are two key considerations to keep in mind. First, if you go down the OpenStack path, it’s important to engage with a provider that has specific and deep OpenStack expertise. Sounds obvious, but in practice finding a provider with a demonstrable, documented track record of success in OpenStack requires some diligence.

Second, choosing OpenStack does not necessarily put you on the hook for attracting and retaining OpenStack private cloud staff; it’s possible and practical to take a hybrid IT approach in which you outsource the day-to-day management of your private cloud, even if it’s on your premises or in your data center.

At Dell, our approach to OpenStack reflects the fact that we have been an OpenStack pioneer. As a result of our long and close engagement with Red Hat, Dell can get a private cloud up and running quickly with certified reference architectures. We also provide comprehensive Managed Cloud Services that can take on the day-to-day management of your cloud, from the hardware to the hypervisor and up to the virtual machines and applications running in them, to ensure that your OpenStack private cloud remains healthy, secure and available.

There’s no one answer to the OpenStack question. But if the answer for your enterprise is yes, Dell can help your OpenStack private cloud succeed.

To learn more about Dell cloud solutions, visit www.dell.com/cloud.  

Dell TechCenter: Newest South African Student Team Visits TACC

South Africa's newest student cluster challenge team recently visited TACC to learn more about HPC and gain insights to help bring home a three-peat championship for their country.(read more)

Dell TechCenter: Get a practical view of what it’s like to transition to @Microsoft #Office365 with this new @DellSoftware Whitepaper

If you are considering a move from your existing messaging and collaboration environment to Office 365, you must check out our new whitepaper titled, Moving Messaging and Collaboration to Office 365 — It’s Not All Sunshine and Rainbows . Microsoft’s...(read more)

Dell TechCenter: Meeting Unique Needs—Verticalizing the Horizontals in IT Services

The IT infrastructures of businesses are a lot like snowflakes—every one is unique—but, on the whole, it’s still snow. Increasingly, the key to remaining competitive within any given industry lies in the ability of IT service providers to keep the underlying needs of day-to-day operations in balance with the unique needs of particular IT environments. It’s a concept that Dell Services refers to as “verticalizing the horizontals.” It’s also the focus of a recent whitepaper, expressing Dell Services’ position on the customization of IT services across the various industries we serve.

Context is critical

There’s no doubt that certain elements of IT solutions have become utilities. But unless the appropriate context is aligned to that utility, technology advancements are meaningless. That’s where the service provider plays a critical role in the transformation journeys of vertical industries.

Today’s service providers must understand their customers’ industry in order to fully grasp the importance of the service they provide to their individual customers. This sounds basic, but it isn’t. For example, what does an IT service provider need to know about an insurance company to help them deliver on their business objectives? Turns out—quite a bit! As Deepak Satya, Director of Solutions for Dell Services and author of the position paper, explains:

“Customers are increasingly differentiating service providers that know and understand their unique vertical market needs from those who simply sell commoditized technology solutions to a horizontal or homogeneous set of buyers.”

So service-level agreements (SLAs) are shifting from purely technology-related to SLAs that deliver on business outcomes.

A well-charted path

“It is paramount for service providers to execute a well-charted path to remain competitive in the future,” Deepak maintains. Here are a few recommendations that service providers who intend to make that journey could find helpful:

  • Identify and evaluate options to develop differentiated value propositions
  • Expand horizontal service lines to create vertical-specific processes
  • Differentiate based on service capabilities to deliver superior value

In the future, service providers that successfully verticalize only stand to benefit from business growth, increased revenue and deeper customer loyalty.

 

About the whitepaper

Elements of technology services have become a utility. But without a context associated with these services and an understanding of the impact it will have on end users or the business, the services are meaningless. This whitepaper shows how services can be differentiated and designed to meet the unique needs of customers and their specific verticals. Dell Infrastructure Services align with the varied dynamics of vertical industries, and we refer to this as “verticalizing the horizontals.” Learn more about the changing paradigm of infrastructure services and how Dell’s approach can prepare you for the future.

Download the whitepaper

 

About the author

As Director of Solutions for Dell Services, Deepak Satya heads the solution center for APJ, EMEA and the Healthcare vertical. Having conceptualized and created the solution center hub in India, Deepak has assumed responsibility for keeping Dell Services and its customers ahead of the curve by developing new services, identifying transformational opportunities and creating solutions. Prior to joining Dell, Deepak defined and aligned services visions with business goals for Wipro Infotech, Wipro Technologies and Cognizant. He has extensive expertise in creating IT infrastructure services strategy, architecture, methodologies, standards and governance, and is skilled in establishing and managing high-performance global teams using blended onshore/offshore delivery models.

Connect with Deepak on LinkedIn

 

Dell TechCenter: XC Series 2.0 ships next week: recreating the magic for our customers

Today we’re sharing details of our next generation of XC Series appliances that are based on software from Nutanix, the hyper-converged market leader, according to IDC. As an end-to-end enterprise information technology provider, we offer customers an extensive portfolio of products and services that deliver rapid time to value, superior ease of use and unrivaled flexibility. We’re able to build out a broad and strong portfolio with our own innovation and through tight collaborations with a growing partner ecosystem.

Our partner ecosystem, the broadest in the industry, underscores our commitment to increase the pace of technology innovation and offers a diverse range of solutions. Our relationships with innovative companies such as Microsoft, VMware and Nutanix play an integral role in delivering integrated solutions that lead to successful business outcomes, especially for customers who are implementing new approaches based on converged infrastructure and software-defined architectures.

The Dell-Nutanix relationship is a good example that demonstrates how strategic relationships can accelerate technology development and, in turn, quickly benefit customers. Last June, we announced a collaborative agreement with Nutanix. Working closely together, we integrated Nutanix’s industry leading storage virtualization software with Dell’s advanced server platform in an appliance form factor, the Dell XC Series of hyper-converged appliances. They’re backed by Dell’s global services and support team and first availability was in November 2014.

With XC Series appliances, customers with virtualized workloads, including desktop virtualization, database, private cloud and big data analytics, benefit from a converged compute-storage solution that scales predictably and simplifies the complexity of managing storage for virtualized environments. These appliances can be deployed in 30 minutes or less, and for VDI projects, they cost up to 27% less and have up to 6X faster time to value than white box solutions, according to recent analysis from Wikibon.

The pace of innovation has continued, and today we’re happy to announce the XC Series Appliances version 2.0. These new appliances are based on our 13th Generation PowerEdge server platform and have more processor and storage options for precise workload matching and more granular, pay-as-you-grow scalability. We also offer the industry’s first XC appliance with a 1U form factor. It can support more virtual desktops in half the rack space and at a lower cost compared to our initial release. Other models are designed and configured for compute- and storage-intensive workloads, and offer up to 60 percent greater storage capacity per node and a lower starting price per terabyte.

Our server platform has the quality and reliability customers need for high performance virtualized workloads and mission critical applications. So you can expect to see more intense collaboration with Nutanix as our development, sales and marketing teams work closely together to introduce new hyper-converged solutions. At every level of this strategic relationship, we’re focused on developing preconfigured workload-specific solutions that deliver lasting value for our customers.

We are excited about what the new XC series appliances can do for customers and would love to hear your feedback! To stay updated, follow @Dell_Storage on Twitter. 

Dell TechCenter: Welcome to the 21st Century of Backup

Thinking like a CIO is not just about bridging gaps between IT and business needs; it can also be about bridging the gaps between generations.

I recently read an article in Human Resources Executive Magazine featuring our own vice president of Human Resources, Steve Price, that focused on millennials (aka GenY’ers) in the workplace and Dell’s initiatives to foster future leaders from that, and other, generations. Called Millennials in Charge, the article touched on the integration of millennials into a workforce of other generations and how they can all have similar goals and objectives in the workplace, but very different approaches to achieving those goals and objectives. In the world of data protection, this sounds strikingly similar to the gap we have seen between the business side of an organization and the IT side. And we hear about it from our customers on a fairly regular basis, too.

Millennials, as I’m sure you know, are those born between 1980 and 1999. You might be a millennial, be a parent or sibling to one, have one on your work team, or maybe your manager is from this generation. According to the article I read, those of us who came before them (yup, between me and you, I was born before 1980) helped shape millennials into what they are. Some say they’re impatient. They want instant gratification. They have something to prove. (By the way, this actually describes most people I know, millennials or not, but maybe that’s just my circle!)  In the US, they’re 80 million strong and will soon represent the majority of the active workforce. That’s pretty significant.

We often hear from our data protection customers that the business wants everything ASAP: instant responses, 24x7, always connected, etc. And IT is in a different place, often behind the scenes backing everything up and keeping everything running, sometimes because it’s unclear what requires the fastest SLA, sometimes because the SLAs are changing constantly or aren’t even identified until there’s a significant data outage. It’s a tricky balance of risk, cost, and service. If IT reduces cost, then most likely risk increases and service is reduced. If IT focuses on improving service, then both risk and cost increase. The ideal situation would be to reduce cost without increasing risk and without reducing service (actually improving service, if possible). Easier said than done.

The gap between business and IT is nothing new, but with more millennials leading in business, the gap could widen. And 21st century backup could have very different requirements. But with more millennials on the IT side, the gap might just begin to close.

In the meantime, you might have some challenges right in front of you today related to protecting your virtual and physical environments. Regardless of what generation your CIO is from, check out Chapter 2 of “Think Like a CIO,” which addresses the challenges of data protection in the hybrid virtual and physical world.

Dell TechCenter: Thought leaders in the mix

Our subject matter experts for Statistica and the Information Management Group (IMG) keep busy, staying abreast of current trends with big and small data, predictive software and real-world analytics solutions. And they frequently comment in industry...(read more)

Dell TechCenter: Teradici PCoIP Workstation Access Software Untethers Dell Precision Customers

Teradici recently entered the next phase of global rollout for its PCoIP Workstation Access Software, which began with support for Dell’s portfolio of Precision tower and rack workstations in August 2014. In this guest post, their Director of Product Management Olivier Favre looks at how customers are using the software to free their employees from their desks.

***************************

Last year, Teradici® introduced PCoIP® Workstation Access Software for Dell Precision, enabling organizations to mobilize Dell workstation users with a flexible, secure, easy-to-deploy software solution that provides instant access to a rich remote computing experience.

Untethered from their physical workstation, Dell Precision users can now seamlessly connect to their workstation from any compatible endpoint – whether a laptop, tablet or mobile device – to review, edit and present their work on the go, from a conference room, or at home.

Cator, Ruma & Associates and Hi-Tek Manufacturing were among the first companies to leverage Teradici PCoIP Workstation Access Software for Dell Precision, making it possible for employees to access their workstations and 3D applications from anywhere for the same high-fidelity experience as if they were sitting at their desks.

Cator, Ruma & Associates, a growing engineering services firm with 80 employees – many of whom work remotely on customer sites – required a simple solution that would boost productivity in the field, while being easy for IT to deploy and manage.

With Teradici PCoIP Workstation Access Software, now mobile workers can count on uncompromised visual quality and performance when tapping into applications hosted on dual-monitor workstations at the Cator, Ruma & Associates home office, even under constrained network conditions and/or bandwidth limitations.

“The PCoIP Workstation Access Software gives the engineers exactly what they want: uncompromised graphics quality and excellent performance regardless of their proximity to the desktop workstation,” according to IT Manager Jacqui Michael.

Hi-Tek Manufacturing, a leading manufacturing services provider for industrial gas turbine and aerospace industries with over 200 employees, required a solution that would: dramatically improve CAD workstation reliability in conference rooms, allow IT to quickly deploy new engineering workstations, and enable the company to tap into a broader base of talent beyond the local area.

In conference rooms, workstations were squeezed into audio-visual cabinets lacking adequate airflow and cooling.  Dissipated heat and extreme temperatures in the cabinets resulted in high workstation failure rates, which were costly and disruptive to compute-intensive design review meetings.

By equipping some of their workstations with Teradici PCoIP Workstation Access Software, Hi-Tek was able to remotely host desktop sessions for display on PCoIP Zero Clients in conference rooms. The small footprint and much cooler operation of the Zero Clients solved the problem of the workstation failures.

The addition of Teradici PCoIP Workstation Access Software also enabled Hi-Tek Manufacturing to recruit engineers outside of the local region, and the Dell Precision workstations were simple to set up and easy to manage.

“The PCoIP Workstation Access Software was super easy – much easier than installing a hardware solution. Installation was literally a five-minute process, and the solution was up and running on both sides – host and client,” said Paul Meredith, IT manager at Hi-Tek Manufacturing.

These are just a couple of examples that show how Dell and Teradici are enhancing mobility for engineers, architects and designers through the use of instant remote access software.

To learn more, visit PCoIP Workstation Access Software and download the 30-day free trial.

***************************

Olivier Favre, Director of Product Management, Teradici

Olivier Favre is the director of product management at Teradici, looking after the workstation portfolio and the newly launched PCoIP Workstation Access Software. He is responsible for managing the portfolio lifecycle and defining new products that meet the needs of the workstation verticals for remote access. Before moving to the workstation and virtual workspace industry, Olivier held several product management positions in the telecom sector and has experience working in both North America and Europe. He holds an MBA from the Kellogg School of Management and a master’s degree in engineering from France.

Dell TechCenter: New Collaboration Saving the Lives of Kids with Cancer

A ground-breaking collaboration between TGen, Dell, Intel, and NMTRC is allowing clinicians to quickly treat pediatric patients, and dramatically improve results.(read more)

Dell TechCenter: Interactive data analytics drive insights

By Armando Acosta


The Apache Hadoop® platform speeds storage, processing and analysis of big, complex data sets, supporting innovative tools that draw immediate insights.

Big data has taken a giant leap beyond its large-enterprise roots, entering boardrooms and data centers across organizations of all sizes and industries. The Apache Hadoop platform has evolved along with the big data landscape and emerged as a major option for storing, processing and analyzing large, complex data sets. In comparison, traditional relational database management or enterprise data warehouse tools often lack the capability to handle such large amounts of diverse data effectively.

Hadoop enables distributed parallel processing of high-volume, high-velocity data across industry-standard servers that both store and process the data. Because it supports structured, semi-structured and unstructured data from disparate systems, the highly scalable Hadoop framework allows organizations to store and analyze more of their data than before to extract business insights. As an open platform for data management and analysis, Hadoop complements existing data systems to bring organizational capabilities into the big data era as analytics environments grow more complex.
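
To illustrate the programming model, here is a tiny, self-contained Python word count in the classic map-and-reduce style used with Hadoop Streaming. It runs locally against standard input; on a real cluster, Hadoop would run the map phase in parallel on the nodes holding each data block and feed the reducer sorted, grouped pairs. This is a generic sketch, not tied to any particular Dell or Cloudera component.

import sys
from itertools import groupby

def mapper(lines):
    # On a cluster, this runs in parallel against each block of input data.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    # Hadoop delivers mapper output grouped and sorted by key;
    # sorted() simulates that shuffle step locally.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    for word, total in reducer(mapper(sys.stdin)):
        print(word + "\t" + str(total))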

Evolving data needs

Early adopters tended to utilize Hadoop for batch processing; prime use cases included data warehouse optimization and extract, transform, load (ETL) processes. Now, IT leaders are expanding the application of Hadoop and related technologies to customer analytics, churn analysis, network security and fraud prevention — many of which require interactive processing and analysis.

As organizations transition to big data technologies, Hadoop has become essential for enabling predictive analytics that use multiple data sources and types. Predictive analytics helps organizations in many different industries answer business-critical questions that had been beyond their reach using basic spreadsheets, databases or business intelligence (BI) tools. For example, financial services companies can move from asking “How much does each customer have in their account?” to answering sophisticated business enablement questions such as “What upsell should I offer a 25-year-old male with checking and IRA accounts?” Retail businesses can progress from “How much did we sell last month?” to “What packages of products are most likely to sell in a given market region?” A healthcare organization can predict which patient is most likely to develop diabetes and when.
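
As a toy illustration of the upsell question above, the following Python sketch trains a simple model on synthetic, made-up past outcomes and scores a new 25-year-old customer with checking and IRA accounts. A production pipeline would draw its features from data stored in Hadoop rather than a hand-typed list.

from sklearn.linear_model import LogisticRegression

# Features per customer: [age, has_checking, has_ira] (synthetic data)
X = [
    [25, 1, 1], [44, 1, 0], [31, 0, 1], [52, 1, 1],
    [23, 1, 0], [38, 0, 0], [29, 1, 1], [61, 1, 0],
]
# Label: 1 if the customer accepted a hypothetical upsell offer
y = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)
prospect = [[25, 1, 1]]  # 25-year-old with checking and IRA accounts
print("P(accepts upsell) =", model.predict_proba(prospect)[0][1])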

Using Hadoop and analytical tools to manage and analyze big data, organizations can personalize each customer experience, predict manufacturing breakdowns to avoid costly repairs and downtime, maximize the potential for business teams to unlock valuable insights, drive increased revenue and more. [See the sidebar, “Doing the (previously) impossible.”]

Parlaying big data to best advantage

Effective use of big data is key to competitive gain, and Dell works with ecosystem partners to help organizations succeed as they evolve their data analytics capabilities. Cloudera plays an important role in the Hadoop ecosystem by providing support and professional feature development to help organizations leverage the open-source platform.

The combination of Cloudera® software on Dell servers enables organizations to successfully implement new data capabilities on field-tested, low-risk technologies. (See the sidebar, “Taking Hadoop for a test-drive.”)

Dell | Cloudera Hadoop Solutions comprise software, hardware, joint support, services and reference architectures that support rapid deployment and streamlined management (see figure). Dell PowerEdge servers, powered by the latest Intel® Xeon® processors, provide the hardware platform.

Solution stack: Dell | Cloudera Hadoop Solutions for big data

Dell | Cloudera Hadoop Solutions are available with Cloudera Enterprise, designed specifically for mission-critical environments. Cloudera Enterprise comprises the Cloudera Distribution including Apache Hadoop (CDH) and the management software and support services needed to keep a Hadoop cluster running consistently and predictably. Cloudera Enterprise allows organizations to implement powerful end-to-end analytic workflows — including batch data processing, interactive query, navigated search, deep data mining and stream processing — from a single common platform.

Accelerated processing. Cloudera Enterprise leverages Hadoop YARN (Yet Another Resource Negotiator), a resource management framework designed to transition users from general batch processing with Hadoop MapReduce to interactive processing. The Apache Spark™ compute engine provides a prime example of how YARN enables organizations to build an interactive analytics platform capable of large-scale data processing. (See the sidebar, “Revving up cluster computing.”)

Built-in security. Role-based access control is critical for supporting data security, governance and compliance. The Apache Sentry system, integrated in CDH, enhances data access protection by defining what users and applications can do with data, based on permissions and authorization. Apache Sentry continues to expand its support for other ecosystem tools within Hadoop. It also includes features and functionality from Project Rhino, originally developed by Intel to enable a consistent security framework for Hadoop components and technologies.

Supporting rapid big data implementations

Dell | Cloudera Hadoop Solutions, accelerated by Intel, provide organizations of all sizes with several turnkey options to meet a wide range of big data use cases.

Getting started. Dell QuickStart for Cloudera Hadoop enables organizations to easily and cost-effectively engage in Hadoop development, testing and proof-of-concept work. The solution includes Dell PowerEdge servers, Cloudera Enterprise Basic Edition and Dell Professional Services to help organizations quickly deploy Hadoop and test processes, data analysis methodologies and operational needs against a fully functioning Hadoop cluster.

Taking the first steps with Hadoop through Dell QuickStart allows organizations to accelerate cluster deployment to pinpoint effective strategies that address the business and technical demands of a big data implementation.

Going mainstream. The Dell | Cloudera Apache Hadoop Solution is an enterprise-ready, end-to-end big data solution that comprises Dell PowerEdge servers, Dell Networking switches, Cloudera Enterprise software and optional managed Hadoop services. The solution also includes Dell | Cloudera Reference Architectures, which offer tested configurations and known performance characteristics to speed the deployment of new data platforms.

Cloudera Enterprise is thoroughly tested and certified to integrate with a wide range of operating systems, hardware, databases, data warehouses, and BI and ETL systems. Broad compatibility enables organizations to take advantage of Hadoop while leveraging their existing tools and resources.

Advancing analytics. The shift to near-real-time analytics processing necessitates systems that can handle memory-intensive workloads. In response, Dell teamed up with Cloudera and Intel to develop the Dell In-Memory Appliance for Cloudera Enterprise with Apache Spark, aimed at simplifying and accelerating Hadoop cluster deployments. By providing fast time to value, the appliance allows organizations to focus on driving innovation and results, rather than on using resources to deploy their Hadoop cluster.

The appliance’s ease of deployment and scalability addresses the needs of organizations that want to use high-performance interactive data analysis for analyzing utility smart meter data, social data for marketing applications, trading data for hedge funds, or server and network log data. Other uses include detecting network intrusion and enabling interactive fraud detection and prevention.

Built on Dell hardware and an Intel performance- and security-optimized chipset, the appliance includes Cloudera Enterprise, which is designed to store any amount or type of data in its original form for as long as desired. The Dell In-Memory Appliance for Cloudera Enterprise comes bundled with Apache Spark and Cloudera Enterprise components such as Cloudera Impala and Cloudera Search.

Cloudera Impala is an open-source massively parallel processing (MPP) query engine that runs natively in Hadoop. The Apache-licensed project enables users to issue low-latency SQL queries to data stored in Apache HDFS™ (Hadoop Distributed File System) and the Apache HBase™ columnar data store without requiring data movement or transformation.
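
As a rough illustration of that low-latency path, a client application might query Impala through impyla, an open-source Python driver; the sketch below is ours, not from the article, and the host, port, table and column names are placeholders (21050 is Impala’s default client port).

    # Hypothetical sketch: querying Impala from Python with the open-source
    # impyla client. Host, port, table and column names are placeholders.
    from impala.dbapi import connect

    conn = connect(host="impala-node.example.com", port=21050)
    cur = conn.cursor()
    # The SQL runs directly against data already stored in HDFS or HBase;
    # no data movement or transformation step is required first.
    cur.execute("SELECT region, COUNT(*) AS hits FROM web_logs GROUP BY region")
    for region, hits in cur.fetchall():
        print(region, hits)
    conn.close()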

Cloudera Search brings full-text, interactive search and scalable, flexible indexing to CDH and enterprise data hubs. Powered by Hadoop and the Apache Solr™ open-source enterprise search platform, Cloudera Search is designed to deliver scale and reliability for integrated, multi-workload search.
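
Because Cloudera Search is built on Solr, it can be reached with standard Solr clients. The sketch below uses the community pysolr package against a hypothetical collection; none of the names come from the article.

    # Hypothetical sketch: full-text query against a Cloudera Search (Solr)
    # collection via the community pysolr client. URL and fields are placeholders.
    import pysolr

    solr = pysolr.Solr("http://search-node.example.com:8983/solr/support_docs")
    results = solr.search("warranty claim", rows=10)  # interactive full-text search
    for doc in results:
        print(doc.get("id"), doc.get("title"))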

Changing the game

Since its beginnings in 2005, Apache Hadoop has played a significant role in advancing large-scale data processing. Likewise, Dell has been working with organizations to customize big data platforms since 2009, delivering some of the first systems optimized to run demanding Hadoop workloads.

Just as Hadoop has evolved into a major data platform, Dell sees Apache Spark as a game-changer for interactive processing, driving Hadoop as the data platform of choice. With connected devices and embedded sensors generating a huge influx of data, streaming data must be analyzed in a fast, efficient manner. Spark offers the flexibility and tools to meet these needs, from running machine-learning algorithms to graphing and visualizing the interrelationships among data elements — all on one platform.

Working together with other industry innovators, Dell is enabling organizations of all sizes to harness the power of Hadoop to accelerate actionable business insights.

Joey Jablonski contributed to this article.

Doing the (previously) impossible

Apache Hadoop and big data analytics capabilities enable organizations to do what they couldn’t do before, whether that means making memorable customer experiences or optimizing operations.

Personalized content. A digital media company turned to Hadoop when burgeoning data volumes hindered its mission to simplify marketers’ access to data that would let them tailor content to individual customers. The company’s move to Cloudera Enterprise, powered by Dell PowerEdge servers, enabled complex, large-scale data processing that delivered greater than 90 percent accuracy for its content personalization services. Moreover, the 24x7 reliability of the Hadoop platform lets the company provide the data its customers need, when they need it.

Product quality management. To help global manufacturers efficiently manage product quality, Omneo implemented a software solution based on the Cloudera Distribution including Apache Hadoop (CDH) running on a cluster of Dell PowerEdge servers. Using the solution, Omneo customers can quickly search, analyze and mine all their data in a single place, so they can identify and resolve emerging supply chain issues. “We are able to help customers search billions of records in seconds with Dell infrastructure and support, Cloudera’s Hadoop solution, and our knowledge of supply chain and quality issues,” says Karim Lokas, senior vice president of marketing and product strategy for Omneo, a division of the global enterprise manufacturing software firm Camstar Systems. “With the visibility provided by this solution, manufacturers can put out more consistent, better products and have less suspect product go out the door.”

Information security services. Dell SecureWorks is on deck 24 hours a day, 365 days a year, to help protect customer IT assets against cyberthreats. To meet its enormous data processing challenges, Dell SecureWorks deployed the Dell | Cloudera Apache Hadoop Solution, powered by Intel Xeon processors, to process billions of events every day. “We can collect and more effectively analyze data with the Dell | Cloudera Apache Hadoop Solution,” says Robert Scudiere, executive director of engineering for SecureWorks. “That means we’re able to increase our research capabilities, which helps with our intelligence services and enables better protection for our clients.” By moving to the Dell | Cloudera Apache Hadoop Solution, Dell SecureWorks can put more data into its clients’ hands so they can respond faster to security threats than before.

Taking Hadoop for a test-drive

How can IT decision makers determine the best way to capitalize on an investment in Apache Hadoop and big data initiatives? Dell has teamed up with Intel to offer the Dell | Intel Cloud Acceleration Program at Dell Solution Centers, giving decision makers a firsthand opportunity to see and test Dell big data solutions.

Experts at Dell Solution Centers located worldwide help bolster the technical skills of anyone new (and not so new) to Hadoop. Participants gain hands-on experience in a variety of areas, from optimizing performance for an application deployed on Dell servers to exploring big data solutions using Hadoop. At a Dell Solution Center, participants can attend a technical briefing with a Dell expert, take part in an architectural design workshop or build a proof of concept to comprehensively validate a big data solution and streamline deployment. Using an organization’s specific configurations and test data, participants can discover how a big data solution from Dell meets their business needs.

For more information, visit Dell Solution Centers

Revving up cluster computing

The expansion of the Internet of Things (IoT) has led to a proliferation of connected devices and machines with embedded sensors that generate tremendous amounts of data. To derive meaningful insights quickly from this data, organizations need interactive processing and analytics, as well as simplified ecosystems and solution stacks.

Apache Spark is poised to become the underpinning technology driving the analysis of IoT data. Spark utilizes in-memory computing to deliver high-performance data processing. It enables applications in Hadoop clusters to run up to 100 times faster than Hadoop MapReduce in memory or 10 times faster on disk. Integrated with Hadoop, Spark runs on the Hadoop YARN (Yet Another Resource Negotiator) cluster manager and is designed to read any existing Hadoop data.

Within its computing framework, Spark is tooled with analytics capabilities that support interactive query, iterative processing, streaming data and complex analytics such as machine learning and graph analytics. Because Spark combines these capabilities in a single workflow out of the box, organizations can use one tool instead of traditional specialized systems for each type of analysis, streamlining their data analytics environments.
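
To make that one-tool idea concrete, here is a minimal PySpark sketch of our own (not from the article) that combines a batch transformation, an interactive SQL query and iterative machine learning in a single program. It uses the Spark 1.x Python APIs current at the time of writing; the HDFS path and column names are hypothetical.

    # Illustrative sketch: one Spark program mixing batch, SQL and machine
    # learning steps. Paths and column names are placeholders.
    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    from pyspark.mllib.clustering import KMeans

    sc = SparkContext(appName="unified-analytics")  # submit with --master yarn to run on YARN
    sqlContext = SQLContext(sc)

    # Batch step: parse raw smart-meter readings and cache them in memory,
    # so later stages reuse the data without rereading from disk.
    readings = sc.textFile("hdfs:///data/meter_readings.csv") \
                 .map(lambda line: line.split(",")) \
                 .map(lambda fields: (fields[0], float(fields[1]))) \
                 .cache()

    # Interactive step: query the same data through Spark SQL.
    df = sqlContext.createDataFrame(readings, ["meter_id", "kwh"])
    df.registerTempTable("readings")
    sqlContext.sql("SELECT meter_id, AVG(kwh) AS avg_kwh "
                   "FROM readings GROUP BY meter_id").show()

    # Iterative step: cluster usage levels with MLlib k-means, which makes
    # repeated passes over the cached RDD.
    model = KMeans.train(readings.map(lambda r: [r[1]]), k=3, maxIterations=10)
    print(model.clusterCenters)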

Learn More:

Hadoop Solutions from Dell

Dell Big Data

Hadoop@Dell.com 

Dell TechCenterAdaptable architecture for workload customization, part 2

By Paul Steeves


An adaptable IT infrastructure is critical in helping enterprises match specific workload requirements and keep pace with advances in computing technology.

The Dell PowerEdge FX architecture enables IT infrastructures to be constructed from small, modular blocks of computing resources that can be easily and flexibly scaled and managed.

PowerEdge FD332 storage block: Dense and flexible DAS

The densely packed PowerEdge FD332 storage block allows FX-based infrastructures to rapidly and flexibly scale storage resources. The PowerEdge FD332 is a half-width, 1U module that holds up to sixteen 2.5-inch hot-plug Serial Attached SCSI (SAS) or SATA SSDs or HDDs. The PowerEdge FD332 is independently serviceable while the PowerEdge FX2 chassis is operating.

FX servers can be attached to one or more PowerEdge FD332 blocks. For each storage block, a single server can attach to all 16 drives, or two servers can split access, each attaching to 8 drives.

This flexibility lets administrators combine FX servers and storage in a wide variety of configurations to address specific processing needs. For example, three PowerEdge FD332 blocks can provide up to 48 drives in a single 2U PowerEdge FX2 chassis — while leaving one half-width chassis slot to house a PowerEdge FC630 for processing. This flexibility results in 2U rack servers with massive direct-attach capacity, enabling a pay-as-you-grow IT model.


Two half-width PowerEdge FD332 storage blocks and two PowerEdge FC630 server blocks in a PowerEdge FX2 chassis

FN I/O aggregators: Effective network consolidation

Designed to simplify cabling for the PowerEdge FX2 chassis, the FN I/O aggregators offer as much as eight-to-one cable aggregation, combining eight internal server ports into one external cable. The I/O aggregators also optimize east-west traffic communication within the chassis, helping greatly increase overall performance by accelerating virtual machine migration and significantly lowering overall latency.

The FN I/O aggregators include automated networking functions with plug-and-play simplicity that give server administrators access-layer ownership. Administrators can quickly and easily deploy a network using the simple graphical user interface of the Chassis Management Controller (CMC) or perform custom network management through a command-line interface. The I/O aggregators also provide Virtual Link Trunking (VLT) and uplink link aggregation.

The FN I/O aggregators support Data Center Bridging (DCB), Fibre Channel over Ethernet (FCoE) and Internet SCSI (iSCSI) optimization to enable converged data and storage traffic. By converging I/O through the aggregators, it is possible to eliminate redundant SAN and LAN infrastructures within the data center. As a result, cabling can be reduced up to 75 percent when connecting server blocks to upstream switches like the Dell Networking S5000 10/40GbE unified storage switch for full-fabric Fibre Channel breakout. Moreover, I/O adapters can be reduced by 50 percent.

PowerEdge FN410s. The FN I/O aggregator provides four ports of small form-factor pluggable + (SFP+) 1/10GbE connectivity and eight 10GbE internal ports. With SFP+ connectivity, the I/O aggregator supports optical and direct-attach copper (DAC) cable media.


PowerEdge FN410s: Four-port SFP+ I/O aggregator

PowerEdge FN410t. The FN I/O aggregator offers four ports of 1/10GbE 10GBASE-T connectivity and eight 10GbE internal ports. The 10GBASE-T connectivity supports cost-effective copper media with maximum transmission distance up to 328 feet (100 meters).

PowerEdge FN410t: Four-port 10GBASE-T I/O aggregator

PowerEdge FN2210s. Providing innovative flexibility for convergence within the PowerEdge FX2 chassis, the PowerEdge FN2210s delivers up to two ports of 2/4/8 Gbps Fibre Channel bandwidth through N_Port ID Virtualization (NPIV) proxy gateway (NPG) mode, along with two SFP+ 10GbE ports. It also can be reconfigured with four SFP+ 10GbE ports through a reboot.

PowerEdge FN2210s: Four-port combination Fibre Channel/Ethernet I/O aggregator

NPG mode technology enables the PowerEdge FN2210s to use converged FCoE inside the PowerEdge FX2 chassis while maintaining traditional unconverged Ethernet and native Fibre Channel outside of the chassis. To converged network adapters, the PowerEdge FN2210s appears as a Fibre Channel forwarder, while its Fibre Channel ports appear as NPIV N_ports or host bus adapters to the external Fibre Channel fabric. This capability allows for connectivity upstream to a Dell Networking S5000 storage switch or many widely deployed Fibre Channel switches providing full fabric services to a SAN array; the PowerEdge FN2210s itself does not provide full Fibre Channel fabric services.

Read More:

Dell TechCenterHow to Auto-Optimize SQL Code - New videos on Toad for Oracle Xpert Edition

What if you could automatically tune your SQL in as few as two clicks? Think of how quickly you could improve performance and how easily you could learn to write better SQL.

We built the Auto Optimize function into Toad for Oracle Xpert Edition to simplify the very common task of rewriting SQL. Auto Optimize automates the process of evaluating hundreds or thousands of rewrites to your SQL statements and presents the best candidate alongside the original so you can see the difference in performance.

  1. Choose the portion of SQL on which you want to run Auto Optimize, then set parameters.
  2. Click OK.
  3. In seconds, Auto Optimize rewrites your SQL and executes as many alternatives as you’ve specified.

Auto Optimize rates the performance of each rewrite by time elapsed, CPU cycles, I/O and more than a dozen other metrics. Each rewrite has a unique execution plan, and the best performing rewrite appears alongside your original SQL so you can study it.

Watch this 3-minute video of our John Pocknell demonstrating how to use Auto Optimize:

(Please visit the site to view this video)

Auto Optimize makes it easy to tune SQL, a task that can consume a lot of a database developer’s time.

Next steps

This is part 4 of my 5-part video series designed to help you decide whether Toad for Oracle Xpert Edition is right for you. If you want to find out more about Auto Optimize, have a look at Reason #1 in “Five Ways Toad Xpert Edition Can Help You Write Better Code,” our updated technical brief.

Want to see for yourself how Toad can help you tune your SQL in just a couple of clicks? Download a 30-day trial and take Toad Xpert Edition for a test drive.

Dell TechCenterCapitalize on mobile moments to increase workforce productivity, part 1

Maybe your typical morning scenario begins with an alarm from your smartphone. You might then use the smartphone to turn up your home thermostat. A quick peek at your weather app confirms that it’s frigid this morning. As you head out the door in...(read more)

Dell TechCenterAnypoint Systems Management e-Book — Nailing Systems Management Basics First

In creating our e-book, “A Single Approach to Anypoint Systems Management,” we looked at the baseline systems management tasks that IT professionals spend the day performing in their traditional endpoint world:

  • Managing hardware and software assets — Inventory, reporting, technology specification, desktop configuration
  • OS deployment and management — Deploying and managing system images for hundreds or thousands of computers on different platforms
  • Application updates — Software distribution and installation of service packs, updates, hotfixes and other digital assets to PCs and servers
  • Patching and configuration enforcement — Performing patching to ensure the latest updates, assessing and addressing vulnerabilities to block potential exploits
  • Service desk — Troubleshooting and resolving user issues quickly to keep workers productive

Of course, those are just the basics. And they add up to no small challenge, to be sure, especially in the face of limited budgets and staffs.

How are you accomplishing these systems management tasks? Are you using spreadsheets? Manual processes? One-off point solutions? Can you see at a glance all of the endpoints on your network? Can you control them and perform most of those baseline systems management tasks on them?

You’ve got to walk before you can run, and you’ve got to handle all of those basics before you can move on to mobile. You might be able to add a mobile device management (MDM) product to your IT toolbox, but if it’s not integrated, you’re asking for more complexity and new cost structures for support, infrastructure and training.

But ready or not, the advent of mobile, BYO and intelligent connected devices is upon you. Every device in your organization must be identified, managed and secured. To keep up in the anypoint systems management world, you’ll need to bring all of your devices under a single systems management solution.

Have a look at Part 1 of our new e-book for more on nailing your systems management basics, to get yourself ready to add mobile, BYO and new connected devices into the systems management mix.

Dell TechCenterDell Executives Honored as CRN Channel Chiefs

The annual CRN Channel Chiefs list is one of the highest honors in the channel industry. The award is described by CRN as a “who's who of channel management” of executives who are “navigating a maze of business model and technology shifts and trying to make sure their company's partners succeed.” Today we are proud to announce that four Dell executives were recognized as 2015 Channel Chiefs, including (left to right):

Dell Channel Chiefs Cheryl Cook, Frank Vitagliano, Jim DeFoe, Marvin Blough

  • Cheryl Cook, Vice President, Global Channels, Dell - Cheryl is responsible for ensuring a consistent and coordinated approach to Dell’s Channel Partners, Strategic ISV and OEM customers and partners. Since Cheryl assumed leadership of Dell’s channel in November 2013, she has been focused on adding value to the PartnerDirect program and providing the technology, tools and strength of the Dell brand to make it easier for partners to bring innovative IT solutions to customers.
  • Frank Vitagliano, Vice President, Global Partner Strategy and Programs, Dell - Frank is responsible for Dell’s overall partnering strategy including Partner Programs and Distribution. In addition, he is the vice chairman of the Board of Directors for CompTIA. Previously, Frank was honored with a Lifetime Achievement Award by VAR Business Magazine and was the first vendor executive inducted into the Ingram Micro Venture Tech Networks Hall of Fame.
  • Jim DeFoe, Vice President, Global Commercial Channels Sales and Programs, Dell - Jim is responsible for creating and delivering sales with all channel partners in North America. His sales leadership and key contributions have been instrumental in the growth of Dell channel sales since the inception of the PartnerDirect program. Overall, Jim has spent the past 18 years in indirect sales with Dell.
  • Marvin Blough, Vice President, Software Channels and Alliances, Dell - During his tenure at Dell Software, Marvin has managed the integration of seven channel programs from the time of acquisition and led the transformation of Dell Software from a primarily direct model to a model that is now more than 60% sold via the channel. He also leads the Managed Service Provider program for Dell Software.

Congratulations to Cheryl, Frank and Jim, who were also named to CRN’s 50 Most Influential Channel Chiefs list, honoring today's most influential movers and shakers in the channel.

Contributing to the Dell channel program’s success was a slew of enhancements that started in January 2014, when Dell rolled out new initiatives within the PartnerDirect program to strengthen its commitment to the channel, including a 20% internal compensation accelerator on key solutions for new customers sold through channel partners. This resulted in over 12,000 new customers and more than 20,000 new orders.

However, none of the above would have been possible without the hard work all the teams put into the new business practices adopted to achieve tighter integration with our partner network.

Our distributors have been key to our growing success in the channel, and they will continue to be core to a successful omni-channel strategy. Dell Solutions are now available via three of the top five distributors worldwide, and Dell experienced double-digit growth in distribution in 2014. Providing partners with choice in how they do business with us has also been important. Distributors do a great job providing our partners with availability, credit, training and support, thus expanding Dell’s overall reach and coverage to our partner base.

Dell also expanded the competencies program to help drive partner success and customer satisfaction – and results show that Dell partners with competencies grow four times faster than partners without competencies. Focusing our efforts on partner enablement has significantly enhanced our partner capabilities, improved their customer satisfaction, and increased partner loyalty. Helping our partners grow and thrive speaks volumes – our top 100 North American partners by training investment were able to grow their revenue by almost 40% year over year.

We continue to make key investments in partner training, expanding the existing nine competencies to include advanced competencies in Storage, Identity and Access Management, and Core Client Solutions and Workstation, as well as expanding the desktop virtualization competency. We saw partner competency completions increase by over 25% year over year, with particularly strong growth across Software (Data Protection, Client Management, Network Security).

It’s been a great year for the Dell channel program, and in turn for so many of our partners. We applaud CRN for recognizing a few of our thousands of partners around the globe as 2015 CRN Channel Chiefs, including outstanding executives from some of our distributors (Ingram Micro, SYNNEX Corporation, and Tech Data) and many other Dell channel partners!

We look forward to continuing Dell PartnerDirect’s growth and increasing our commitment to partners by developing a wide-ranging set of new campaigns to drive predictability, profit and opportunity for the indirect channel worldwide in 2015 and beyond. In closing, if you’re a Dell channel partner and would like to learn more about how Dell can help you and your customers, please email Certified_PRD.

Barton GeorgeUpdate 2: Dell XPS 13 laptop, developer edition – Sputnik Gen 4

Two weeks ago I posted an update on the XPS 13 developer edition mentioning that we were addressing a few issues before we felt the system was ready to launch.  Now that we are a little further along, we wanted to provide more details.

The main issues delaying launch involve the touchpad and a repeating-keystroke issue.

Status update around the issues:

  • Working on getting A01 BIOS out (fixes keyboard repeating keystroke issue)
  • We confirmed that A01, with the acpi_osi and resetafter kernel parameters, is a workaround that makes the touchpad work smoothly
  • Sound works with above workaround
  • Mario Limonciello on the Sputnik team submitted a patch upstream to have the resetafter workaround done automatically: http://lkml.iu.edu/hypermail/linux/kernel/1502.2/02389.html

More details:

  • The acpi_osi kernel parameter: Adding acpi_osi="!Windows 2013" as a kernel boot parameter, followed by two cold reboots, switches the audio to HDA mode (which Linux currently supports) and puts the touchpad into PS/2 mode.
  • The resetafter kernel parameter: Adding psmouse.resetafter=0 as a kernel boot parameter when the touchpad is in PS/2 mode keeps the touchpad from resetting (which is actually what happens with this touchpad in PS/2 mode on Windows 7). A sketch of how these parameters are typically applied follows this list.
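
For readers who want to try the workaround, kernel boot parameters on an Ubuntu-based install like the developer edition’s are typically added through /etc/default/grub. The snippet below is a sketch under that assumption, not an official procedure; note that the inner quotes around !Windows 2013 must be escaped.

    # /etc/default/grub -- sketch assuming a stock Ubuntu GRUB setup; append
    # the two workaround parameters to the default kernel command line:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=\"!Windows 2013\" psmouse.resetafter=0"

    # Then regenerate grub.cfg and cold-reboot twice, per the notes above:
    #   sudo update-grub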

Please visit the Dell TechCenter Project Sputnik forum for discussion.

Thanks everyone for your support and we appreciate your patience as we work with our partners to resolve these issues.

Extra-credit reading

Pau for now


William LearaNIST 800-147: BIOS Protection Guidelines

Introducing NIST

The U.S. Federal Government operates the National Institute of Standards and Technology (NIST). NIST’s mission is to “Promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” Of special interest to BIOS programmers is NIST’s 800 series of reports covering information technology:

The Information Technology Laboratory (ITL) at the National Institute of Standards and Technology promotes the U.S. economy and public welfare by providing technical leadership for the nation’s measurement and standards infrastructure. ITL develops tests, test methods, reference data, proof of concept implementations, and technical analysis to advance the development and productive use of information technology. ITL’s responsibilities include the development of technical, physical, administrative, and management standards and guidelines for the cost-effective security and privacy of sensitive unclassified information in Federal computer systems. This Special Publication 800-series reports on ITL’s research, guidance, and outreach efforts in computer security and its collaborative activities with industry, government, and academic organizations.

Why is all this important? Because the U.S. Federal government follows the standards set by NIST, and in order for companies to sell computers to the U.S. Federal Government, a huge customer, they must meet these standards!

 

Enter NIST 800-147

Which brings us to NIST publication 800-147. 800-147 is meant to provide security guidelines for preventing the unauthorized modification of BIOS firmware.

While BIOS threats are not necessarily new (consider CIH), the transition from Legacy BIOS to UEFI BIOS especially motivated the creation of 800-147.  Now, more than ever before, BIOS is a tempting target because all its interfaces are standardized.  In the Legacy BIOS era, writing malware that exploited Dell, Compaq, and IBM (et al.) BIOS implementations alike was impractical, since there was little standardization in how these vendors’ systems worked.  (Don’t believe me?  Then check out Ralf Brown’s famous list and see how much vendor-specific variation there is between “standard” software interrupts!)  UEFI provides the standardization that makes the job of malware authors easier—write once, infect everywhere.

Moreover, the system BIOS is an especially attractive target for attack. Malicious code running at the BIOS level has a great deal of control over the computer. It could be used to compromise components loaded later in the boot process, including SMM code, the boot loader, hypervisor, and operating system. Since BIOS is stored in NVRAM, malware written into a BIOS could be used to re-infect machines even after new operating systems have been installed or hard drives replaced. Because the system BIOS runs early in the boot process with very high privileges on the machine, malware running at the BIOS level may be very difficult to detect. Because the BIOS loads first, there is no opportunity for anti-malware products to authoritatively scan the BIOS.  Therefore, NIST is interested in protecting BIOS as much as possible.

 

Recommendations of 800-147

The 800-147 report is all about specifying a secure BIOS update mechanism. A secure BIOS update mechanism includes:

  • a process for verifying the authenticity and integrity of BIOS updates
  • a mechanism for ensuring that the BIOS is protected from modification outside of the secure update process.

800-147 makes the following four recommendations:

  1. An authenticated BIOS update mechanism, where digital signatures prevent the installation of BIOS update images that are not authentic. (A minimal sketch of this signature check follows this list.)
    • The authenticated BIOS update mechanism employs digital signatures to ensure the authenticity of the BIOS update image. To update the BIOS using the authenticated BIOS update mechanism, there shall be a Root of Trust for Update (RTU) that contains a signature verification algorithm and a key store that includes the public key needed to verify the signature on the BIOS update image.
    • The authenticated update mechanism should prevent the unauthorized rollback of the BIOS to an earlier authentic version that has a known security weakness.
  2. Integrity protection features, to prevent unintended or malicious modification of the BIOS outside the authenticated BIOS update process.
    • To prevent unintended or malicious modification of the system BIOS outside the authenticated BIOS update process, the RTU and the system BIOS shall be protected from unintended or malicious modification with a mechanism that cannot be overridden outside of an authenticated BIOS update.
    • The authenticated BIOS update mechanism shall be protected from unintended or malicious modification by a mechanism that is at least as strong as that protecting the RTU and the system BIOS.
  3. Non-bypassability features, to ensure that there are no mechanisms that allow the system processor or any other system component to bypass the authenticated update mechanism.
    • The authenticated BIOS update mechanism shall be the exclusive mechanism for modifying the system BIOS, absent physical intervention through the secure local update mechanism described below.
  4. An optional “secure local update” mechanism, where physical presence authorizes installation of BIOS update images outside the authenticated update mechanism.
    • BIOS implementations may optionally include a secure local update mechanism that updates the system BIOS without using the authenticated update mechanism.
    • A secure local update mechanism shall ensure the authenticity and integrity of the BIOS update image by requiring physical presence.
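
To ground recommendation 1, here is a minimal sketch of the kind of check a Root of Trust for Update performs.  It is ours, not NIST’s: a real RTU runs in protected firmware rather than in Python, and the key and file names below are hypothetical.  The sketch assumes an RSA public key from the RTU key store and a detached SHA-256/PKCS#1 v1.5 signature shipped alongside the update image, verified with the Python cryptography package.

    # Illustrative RTU-style verification -- not a real firmware implementation.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    # Public key from the RTU key store; update image and detached signature.
    public_key = serialization.load_pem_public_key(open("rtu_pubkey.pem", "rb").read())
    image = open("bios_update.img", "rb").read()
    signature = open("bios_update.sig", "rb").read()

    try:
        # Verify the signature over the entire update image before flashing.
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid: the update image is authentic and may be installed.")
    except InvalidSignature:
        print("Signature invalid: reject the update image.")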

 

Conclusion

The NIST 800-147 recommendations are on-target and needed by our industry.  BIOS engineers working on security or the flash update process need to be familiar with 800-147.  Appendix A, “Summary of Guidelines for System BIOS Implementations,” is an especially good reference for the recommendations.  Please read the (relatively short) document to see complete details.  Download it from NIST here:

http://csrc.nist.gov/publications/nistpubs/800-147/NIST-SP800-147-April2011.pdf

Dell TechCenterUnified Communications Success: The Proof is in the Reporting

As I’ve discussed in my earlier blog posts, organizations invest in unified communications (UC) solution not only to improve communication and collaboration but to reduce costs and complexity. Unfortunately, however, more than half of those organizations...(read more)

Dell TechCenterTeam South Africa Prep for "Three-peat" in Student Cluster Competition

To say that “everything is bigger in Texas” is a cliché overlooks the fact that clichés are often rooted in truth.

I recently had a chance to catch up with several university students who had traveled halfway around the world to Dell’s main campus, and I heard them say more than once, “I didn’t expect everything to be so big!” They weren’t talking about the square footage of our home state itself, but the high performance computing (HPC) environments they were here to see.

This was no ordinary school field trip. These students were representing the South Africa Center for High Performance Computing and they were here for serious preparation before the 4th HPCAC-ISC Student Cluster Competition (SCC) in Frankfurt, Germany. As a sponsor for Team South Africa, Dell was doing our part to help them prepare.

“We bring them here for a week to meet with our engineering and research teams, discuss performance, tuning, storage and file systems, and give them more insight into HPC best practices,” said Vernon Nicholls, advisor to the team from Dell South Africa.

“The amount of information is amazing, and the people we’ve met have been incredible,” student Jenalea Miller told me when I asked about their time in Texas.

In addition to our own Dell labs where I caught up with the team, they also spent time on The University of Texas (UT) campus at the Texas Advanced Computing Center (TACC). TACC designs and operates some of the world's most powerful computing resources. They are able to boast that 198 research projects from 108 institutions are currently exploring 61 fields of science on their systems.

“The UT campus is really different from our own,” said Nabeel Rajab. “It’s massive in scale, and at TACC we got to see custom-built Dell clusters.”

All of the members of this year’s team are electrical engineering students at the University of the Witwatersrand, or Wits, a public research university in Johannesburg. The success of previous Dell-sponsored South African teams in the SCC drew them to explore HPC.

“The Student Cluster Challenge is one of my favorite programs that Dell supports – and has turned out to be a fun and unexpected program to be involved with,” said Christine Fronczak after Team South Africa won in 2013. “Under a theme of inspiring the next generation HPC professional, Dell has sponsored multiple teams in the SCC for several years.”

Team South Africa repeated as the SCC champions at ISC in 2014, besting 10 other teams from seven different countries on their way to claim the overall title. This year’s team didn’t seem to be wasting time worrying about following in their footsteps, though. When I asked about expectations for this trip to Austin, they were all business.

“We wanted to nail down the preliminary design of our cluster, and we made a lot of important decisions,” said James Allingham.

The theme of size came up again as he talked about seeing Stampede, TACC’s supercomputer that recently made headlines for helping astronomers better understand the physics behind galaxy and star formation.

“When you’re talking scale -- that was intense,” Allingham said.

So with their views expanded, but with a lot of hard work still in front of them before the international competition in Germany, I wondered if the team had given thought to their long-term career goals.

“I’ve grown so much in this field that I wouldn’t want to waste it,” Rajab said. “The expert knowledge we’ve gained through this program and the opportunity to meet the experts we have met has really given us a jumpstart.”

It’s something his teammate Rob Clucas would recommend to others, as well.

“Participating in this competition means you get to learn a lot – you learn a lot about the HPC architecture, but also about working in a team,” Clucas said.

They’ve got a lot of intense work ahead of them now that they have returned to South Africa. They will be building, testing and refining their cluster. Then in June they’ll break it all down, carry it to Frankfurt and rebuild it for the competition, where they will be scored on the cluster’s general performance, the applications that will be run on it and an interview by representatives of the SCC board.

Rajab hopes the results will show the world that South Africa has a lot of talent.

“Through the competition and interacting with the other teams from around our country, I’ve met some very smart people,” he said. “There are lots of really good people that didn’t make this team.”

But, he also had good things to say about Dell’s home town:

“You always hear of Silicon Valley, but there’s a lot of amazing computing happening in Austin.”

 

Pictured left to right: Ari Isaac Croock, Nabeel Rajab, Munira Hussain, senior systems engineer, High Performance Computing at Dell, Robert John Clucas, Jenalea Norma Miller, James Urquhart Allingham, and Sasha Tarryn Naidoo

Dell TechCenterDon't Miss Tuesday's RFS Webcast: Anatomy of a Data Breach @DellSoftware @Dell_WM

Tuesday, February 24 2 p.m. Eastern / 11 a.m. Pacific We all face it sooner or later. Some very confidential information is disclosed and the pressure is on to find out how it happened and who did it. In this real training for free™...(read more)

Dell TechCenterTurn Opportunity into Impact at Dell Security Virtual Peak Performance 2015

The Dell Security Peak Performance event last October in Orlando was incredible.

“Wow, excellent experience at Dell Peak Performance 2014! Learned so much. Lots of great ideas for new business! Lots of great partners eager to help,” one of last year’s attendees said. 

(Please visit the site to view this video)

If you were unable to attend in person but are curious about what all of the buzz has been about lately, I encourage you to attend our upcoming Dell Security Virtual Peak Performance 2015 event for our partners on Wednesday, Feb. 25, 2015. This year will be unforgettable at Dell Security, and we’d be ecstatic for you to be a part of the success.

Dell channel partners attending Dell Security Virtual Peak Performance 2015, Feb. 25, from 8 a.m. to 1 p.m. PT, will gain valuable insights to help them reach new heights with our award-winning Dell SuperMassive next-gen firewalls, Dell Secure Mobile Access and Dell Email Security solutions. We have twice as many resources dedicated to the channel, 100 percent focused on what is core to growing your business.

Just this year, Business Solutions Magazine awarded Dell Security “Best Channel Vendor 2015” in the network security category. I am excited to kick off the keynote general session with Matt Medeiros, VP & GM, Dell Security, and Marvin Blough, executive director of Global Alliances and Channel, Dell Software. Security is still a number one concern for customers, and our channel-centric support will accelerate your business for a higher impact. The keynote will be followed by informative sessions led by our experts, offering security strategies for end-to-end protection against today’s deadliest cyberattacks:

  • Technical Deep Dives:
    • Dell SonicWALL Email Security on-premise and cloud – What’s next?
    • Dell Network Security – Latest developments in SonicOS 6.2.2
    • Dell Secure Remote Access – Enhance your per-App VPN knowledge and SMA 11.0
  • Business Sessions:
    • Enterprise Next-Gen Firewall: Drive Growth and Profitability with Dell SuperMassive 9000 Series
    • What’s New in Secure Wireless?
    • How is Dell SonicWALL Addressing BYOD and Securing Mobile Access?
    • Email Security – Expand Your Services with the New Brand Protection and Encryption Features

This dynamic, interactive event will allow you to ask live questions and have a two-way dialogue with our experts, as well as access additional resources in a unique virtual environment. We will have this information recorded and available for you to access online following the event. I assure you it will be well worth your time. Here is a partner’s experience from Peak Performance:

Dell partners will have the chance to win a Dell Venue 8 7000 Series tablet after each presentation. Don’t miss your opportunity to win. Stay tuned and join the conversation in real time via Twitter using #DellPeak, and follow @DellSecurity and @DellChannel for the latest updates live online for this virtual event. If you can’t wait until next week, you can demo our security products online by visiting our Live Demo site. I am looking forward to hearing from you next week.

Dell TechCenterTop 10 Reports for Your Windows Server Environment

“Have you hugged your Active Directory today?” That wouldn’t make much of a bumper sticker I suppose, but you could say that the health and happiness of your Active Directory were top of mind for us when we packed more than 140 reports...(read more)

Dell TechCenterThe Cloud for Backup and Disaster Recovery: Not Just About Saving Money

What if you could give a company the opportunity to focus more on product innovation and less on running a data center? What if backup, replication and recovery were as simple as booting a virtual machine? What if you could have rock-solid disaster recovery (DR) without a dedicated DR site? Yes, it’s possible. Watch our webcast and find out how.

Cloud-based backup restores operations almost instantly, with a recovery time objective (RTO) of minutes. Traditional tape restoration can take a full day — or more. And data in the cloud is truly out of harm’s way: backed up off-site, safe from theft, fire or any other disaster, and it can be stored on servers in multiple locations rather than in just one physical place.

Often presented as the most cost-effective option, cloud-based backup eliminates the need for costly tapes and dedicated servers and makes the most out of disk space. It gives a startup or SME access to enterprise-level IT architecture without an enterprise-level budget. But moving to the cloud is not just about saving money. Most enterprises don’t cite cost-reduction as the main reason. It’s about significantly reducing the possibility of losing data.

AppAssure, when paired with a cloud platform like Microsoft Azure, is the logical centerpiece of a comprehensive disaster recovery process. You can store backup archives directly on Azure and perform item-level recovery from the archive without having to download the archive from the cloud. You can back up and replicate data to multiple targets. This means that an enterprise can create multi-tier backup and disaster recovery solutions using its internal servers, a secondary data center and a cloud data center. You can also reduce the amount of on-staff expertise needed to pull off a DR initiative, thanks to cloud-based Disaster Recovery as a Service (DRaaS) offered by companies like Dell partner eFolder.

Ready to begin protecting your data in the cloud? Three things to keep in mind:

1. Whatever your data protection needs, there has never been a better time to modernize backup and data protection thanks to emerging cloud and virtualization technologies. These opportunities to protect data in physical, virtual, and cloud environments are about more than just lower IT-related costs — they can make an IT system more flexible and dynamic, a real factor in business innovation. Creating the right data protection strategy now means more security and more upside in the future.

2. Nail down the difference between business continuance (BC) and DR. Think of BC as disaster prevention. BC means you can keep your business running and guarantee high availability in any (reasonable) event. How? By isolating and containing faults before they unleash a larger event or lead to a disaster scenario. DR, on the other hand, is the process of putting all the pieces back together after an incident that could not — within reason or budget — be contained. It involves rebuilding, recovering, restarting, and resuming business.

3. Don’t treat all data as created equal. To minimize risk, align your data protection strategy with the sensitivity of the data. Not all applications will have the same time sensitivity, so not all data and applications should be protected the same way. That means you can reduce overhead, complexity, and cost by using tiered data protection.

Reducing IT costs is always a priority but the main reason for cloud-based backup software is not the dollars and cents. It’s about being able to focus on what matters most: your products and customers.

Go to our webcast and find out more about AppAssure and cloud-based recovery options (in partnership with Microsoft):

 

 

Dell TechCenterEnterprise Reporter: How do I collect more attributes in my discoveries?

In his previous article, Jason explained Why you would want to collect group members in discoveries other than Active Directory discoveries. Today, we will wrap up this series of discussions on discoveries by reviewing how to expand discoveries by collecting...(read more)

Dell TechCenterDell Cloud Marketplace and Dell Cloud Manager Orchestration

This post is a collaboration between Andrey Belik, senior program manager, and James Urquhart, head of product strategy for Dell Cloud Manager.

In November 2014 we announced the public beta of Dell Cloud Marketplace. Since the beta launched, we have seen great interest from our customers, and we are constantly collecting feedback and finding ways to improve our product.

What you may not know is that Dell Cloud Marketplace runs on top of a new service catalog and orchestration engine built into Dell Cloud Manager, the award-winning cloud management software product. In fact, Dell Cloud Marketplace is the first iteration of our greater vision. Our initial focus was to introduce a new, simple user experience for launching applications. However, the feedback we’ve received from our customers continues to inform our vision for the future of this capability.

Applications available through the Dell Cloud Marketplace are built on Docker or traditional cloud server images. We made it very simple to deploy any of the marketplace applications. With just a few clicks, the chosen application is deployed to any supported cloud. The power of Dell Cloud Manager lets us hide all the complexity of application configuration and orchestration from the user.

We also provide a number of features that are critical to organizations that have a large number of users and teams building or consuming these applications. There are highly granular access control policies that can be applied across cloud services and platforms. And one unique feature of Dell Cloud Manager is its budget control policies, which allow operators to set budgetary limits on services consumed by individuals or groups.

Since the initial public beta announcement of Dell Cloud Marketplace, we have been adding more applications to our catalog and making changes to our product to prepare for the next iteration of our service catalog.  Over the next couple of months we will be releasing new features that will enhance our application catalog and orchestration capabilities for the enterprise, introducing a new era of cloud solutions management. To that end, we’re pleased to announce that in the second quarter of this year, a release of Dell Cloud Manager will deliver a private service catalog.

This feature will allow users to define, deploy and manage custom applications. Application developers will then be able to describe their applications’ topologies with a standards-based meta-format, basically defining how an application is structured and how each component needs to be deployed and/or managed. Cloud Manager will handle deployment and automate many operational tasks, including execution of Chef or Puppet run lists during instance instantiation, using a new generation orchestration engine.

We currently offer Dell Cloud Marketplace functionality for three of the clouds supported by Dell Cloud Manager: AWS, Google Compute Engine, and Joyent. Our engine can make smart choices on each cloud and configure applications to suit unique cloud features. We handle firewall configuration, SSH keys, network configuration and more, based on application requirements, cloud nuances and user preferences. The Dell Cloud Manager Private Service Catalog – that name might change, by the way – will not only support all the capability you can find in the Dell Cloud Marketplace, but will also support many additional clouds. For a full list of clouds supported by Dell Cloud Manager today, visit http://www.enstratius.com/clouds.

Dell TechCenterAnypoint Systems Management e-Book – Beyond Managing PCs, Macs and Servers

Do you ever wish you could be like the guy in the photo? Wouldn't it be cool to have a zillion smartphones, tablets, laptops, MacBooks and Chromebooks all around you to play with? Sure it would.

Oh, wait — you’re in IT. Your co-workers are already coming in to work with all kinds of BYO devices like the ones in the photo. You already have a zillion devices around you. So why don’t you have a smile on your face, the way this fellow does?

Because you have to manage and secure them all, that’s why.

In my last post I introduced you to Eddie Endpoint and mentioned our new e-book, “A Single Approach to Anypoint Systems Management.” This time I’ll describe how the endpoint world is growing beyond the PCs, Macs and servers you and Eddie have known, and gradually becoming an “anypoint” world.

Anypoint management is here. Are you ready?

We sponsored a Dimensional Research survey a couple of months ago and uncovered what over 700 of your colleagues think about managing these anypoints. Do some of these findings ring a bell with you?

  • In addition to traditional computing devices, 96 percent of those surveyed had printing devices, 84 percent had mobile devices and 53 percent had audio-visual devices connected to their networks.

  • More than half of the survey respondents had three or more systems management tools. That makes sense because lots of administrators have to manage their mobile and BYO devices separately from their traditional computing devices, using mobile device management products and the like. Still, 67 percent of those polled wanted to use fewer systems, which makes even more sense.

  • More than 60 percent of the survey participants were sure, or suspected, that there were unknown devices or applications connected to their networks. Nobody in IT likes that, so it’s a wake-up call to start pulling BYO, mobile and other network-connected devices into mainstream systems management products and keep them from falling through the cracks.

But how are you going to bring all of your anypoints into a single systems management umbrella? Stay tuned for more in my next post. Meanwhile, read Part 1 of our new e-book right now.

 

Dell TechCenterSteering through diverse stakeholder demands for enterprise mobility

Providing mobile employees with anytime, anywhere access to enterprise resources can help significantly improve workforce productivity and organizational agility. But to realize the full potential of enterprise mobility, you need to understand the expectations...(read more)

Dell TechCenterHow to Enable Code Quality Assurance - New videos on Toad for Oracle Xpert Edition

It’s not easy to enforce a minimum level of quality in your organization’s code before check-in to version control, but it’s important to try. While application developers have been able to do this for a long time, it hasn’t been a mainstream function of database development environments.

With Toad for Oracle Xpert Edition you can combine the Team Coding feature with the Code Analysis feature I described in my last post and your version control system to ensure good coding policies at check-in and consistency across teams. Team Coding is a collaborative utility that accesses the version control system through the Toad editor. If somebody tries to check in code that does not conform to team or project standards, you can configure Team Coding to deny check-in until the code has been improved.

  1. Edit code and check it in to your version control system through the Toad for Oracle editor.
  2. If any of the code violates the standards you’ve set, you’ll receive a validation error that helps you find and repair the offending code.

Click on this 3-minute video of our John Pocknell demonstrating how to use Team Coding in Toad Xpert Edition:

(Please visit the site to view this video)

The Team Coding/Code Analysis feature can prevent code quality regression from stretching out your development cycles.

Next steps

This is part 3 of my 5-part video series designed to help you decide whether Toad for Oracle Xpert Edition is right for you. If you’ve been following along, you’ve seen scenarios in which it can help you optimize code and improve collaboration.

For more details on Code Tuning, have a look at Reason #3 in “Five Ways Toad Xpert Edition Can Help You Write Better Code,” our updated technical brief.

Want to see for yourself how Toad can help you keep sub-standard code out of your version control system? Download a 30-day trial and take Toad Xpert Edition for a test drive.

Dell TechCenterWe’re Now Accepting Bitcoin in the UK and Canada

Live from eTail West, we’re excited to announce we’re now accepting bitcoin in the UK and Canada on Dell.com, making Dell the largest merchant to accept bitcoin internationally.

Following our successful US pilot, we’ve decided to bring the world’s most widely used digital currency to our consumer and small business customers in the UK and Canada. We are seeing purchases across our full product and customer spectrum – from software and peripherals to our business PCs and even our largest transaction to date – north of $50,000 for a highly configured PowerEdge server system.

This form of payment is clearly resonating with consumers and small and medium businesses. And now we’re excited to take the choice and flexibility this payment option offers global, maintaining our partnership with Coinbase, a trusted and secure third-party payment processor, to make this possible.

When you are ready to make a purchase, simply add the items to your cart and choose Bitcoin as the payment method. Check out our special video guide to see the bitcoin payment process in action.

“Through the expansion of Bitcoin we’re enabling new levels of convenience for our customers, making it easier for them to do business with Dell,” said Paul J. Walsh, CIO, Dell.

Have questions about Bitcoin? We’ve tried our best to answer some of them in our original blog post here and in our Terms & Conditions, or feel free to leave them in the comments below. 

Image by Francis Storr via Creative Commons (CC BY-SA 2.0)

Dell TechCenterDell Precision Helps Taxa Inc Engineer the Cricket Trailer

NOTE: This is the next installment of Dell Precision’s “Purpose Built” series. Past posts and videos include OXO, Kenguru and YETI Coolers. “Purpose Built” will share the stories of ground-breaking design that touch the lives of many people.

Garrett Finney, the CEO and Founder of Taxa Inc., has spent most of his life as a camper – backpacking, hiking, kayaking, and spending time outdoors. But he found that the trailers and trucks currently on the market couldn’t satisfy the camping needs of him and his family. Garrett wanted something lightweight, something he could tow with his current vehicle rather than needing a big truck, and something that could sleep his family of four. Being a designer and an architect, he decided to design something himself that would work.

Brian Black, lead industrial designer at Taxa Inc., sits at his desk working on a Dell Precision workstation.

As a former Senior Architect at the Habitability Design Center for NASA, Garrett was once tasked with working on habitation modules for the International Space Station. Today, he and Brian Black, the Lead Industrial Designer at Taxa Inc., have created a solution for small families who previously had a hard time going camping due to cost or space limitations. Their invention, the Cricket Trailer – a simple, lightweight, flexible outdoor living space for family camping – has reconceived what camping should be.

A natural extension of Garrett’s experience building small, efficient living quarters for space, the Cricket is inspired by his history at NASA and is in many ways modeled after aerospace design. The Cricket fills the gap between the RV and the tent, but what really differentiates it is the level of flexibility and customization built into the trailer.

Engineered using Dell Precision workstations and Dassault Systèmes SOLIDWORKS, the Cricket’s lightweight aluminum design is efficient in terms of space and cost, allowing for much more versatility than your average RV provides.

“The best way technology and software helps us is using different materials that maybe haven’t been used before in our industry,” said Brian Black, Lead Industrial Designer at Taxa Inc. “Our product is very lightweight. It’s very small. But we do pack a lot into that. SOLIDWORKS allows us to really look at a lot of different systems all at once, see how they interact, revise them, and iterate a lot quicker than if we were to just be prototyping in real life.”

“We have reconceived what camping should be, and technology lets us get our ideas into production faster. From a napkin sketch to prototype to test article to production, it’s just so much faster than it used to be. And technology helps every step of the way,” said Garrett Finney, CEO and Founder of Taxa Inc.

As a longtime Dell customer, Garrett has owned many Dell products over the years. He and Brian used Dell technology to design both the Cricket and their most recent product, the FIREFLY, the company’s second development idea and a much more rugged vehicle than the Cricket. The FIREFLY is more of a toolbox you can sleep in, intended for government and recreational uses such as disaster relief or hunting. Both designs have become extremely popular, and with such high demand for their products, Finney and Black have found that technology also plays a critical role in accommodating an increased production rate.

“My company’s focus is on comfortable camping, not a house on wheels, so it’s not about bringing everything but rather about having a really different experience – having an adventure,” said Garrett.

Watch the video to find out more about the technology and story behind Taxa Inc. and the Cricket Trailer.

(Please visit the site to view this video)

Dell TechCenterAdaptable architecture for workload customization, part 1

Paul Steeves is a senior marketing manager for the Dell Enterprise Solutions Group

An adaptable IT infrastructure is critical in helping enterprises match specific workload requirements and keep pace with advances in computing technology.

The Dell PowerEdge FX architecture enables IT infrastructures to be constructed from small, modular blocks of computing resources that can be easily and flexibly scaled and managed.

PowerEdge FX2 chassis: Modular platform

A small, modular foundation for the FX architecture, the flexible and efficient PowerEdge FX2 can be easily customized to fit an organization’s specific computing needs. The PowerEdge FX2 is a standard 2U rack-based platform with shared, redundant power and cooling; I/O fabric; and management infrastructure to service flexible blocks of compute and storage resources. It can be configured to hold half-width, quarter-width or full-width 1U blocks. The switched configuration, the PowerEdge FX2s, supports up to eight low-profile PCI Express (PCIe) Gen 3 expansion slots, which are not included in the PowerEdge FX2.

Part of the shared fabric in the PowerEdge FX2 is reserved for systems management performed through the Integrated Dell Remote Access Controller 8 (iDRAC8) with Lifecycle Controller of each server block. Using this fabric and redundant ports, iDRAC8 can monitor, manage, update, troubleshoot and remediate FX servers from any location — without the use of agents.

Additionally, the PowerEdge FX2 chassis hosts redundant, quad-port pass-through Gigabit Ethernet (GbE) or 10 Gigabit Ethernet (10GbE) I/O modules. Administrators have the option of replacing these modules with FN I/O aggregators that are designed to add more network functionality, simplify physical cabling and reduce the complexity and cost of upstream switching.

Finally, the PowerEdge FX2 chassis helps improve cost-efficiency through its shared cooling and power architecture.

PowerEdge FX2 chassis with eight PowerEdge FC430 servers

PowerEdge FC830 server: Dense, scale-up computing

Providing dense compute and memory scalability and a highly expandable storage subsystem, the PowerEdge FC830 excels at running a wide range of applications and virtualization environments for both midsize and large enterprises. The PowerEdge FC830 is a full-width, four-socket server block that has either eight 2.5-inch drives or sixteen 1.8-inch drives and can access up to eight PCIe expansion slots.

The PowerEdge FC830 provides flexible virtualization with excellent virtual machine density and highly scalable resources for the consolidation of large or performance-hungry virtual machines. In addition, the server can incorporate the use of storage area network (SAN), direct attach storage (DAS) or virtual storage environments.

Individual PowerEdge FC830 server with sixteen 1.8-inch drives

PowerEdge FC630 server: High-performance workhorse

Designed for enterprises looking for high-performance computational density, the PowerEdge FC630 is a powerful workhorse for IT infrastructures. The half-width, two-socket server delivers an exceptional amount of computing power in a very small, easily scalable form factor that includes the latest 18-core Intel® Xeon® processor E5-2600 v3 product family and up to 24 dual in-line memory modules (DIMMs). This means that a 2U PowerEdge FX2 chassis fully loaded with four PowerEdge FC630 servers can be scaled up to 144 cores and 96 DIMMs.
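
That capacity claim is easy to sanity-check with back-of-the-envelope arithmetic; this short sketch simply multiplies out the figures quoted above:

    # Capacity math for a 2U FX2 chassis fully loaded with four
    # half-width PowerEdge FC630 server blocks.
    servers_per_chassis = 4
    sockets_per_server = 2
    cores_per_socket = 18    # top-end Intel Xeon E5-2600 v3
    dimms_per_server = 24

    total_cores = servers_per_chassis * sockets_per_server * cores_per_socket
    total_dimms = servers_per_chassis * dimms_per_server
    print(total_cores, "cores,", total_dimms, "DIMMs")  # 144 cores, 96 DIMMs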

The PowerEdge FC630 offers two internal storage options: an eight 1.8-inch drive configuration and a two 2.5-inch drive configuration, the latter of which supports PowerEdge Express Flash non-volatile memory express (NVMe) PCIe devices. As a result, the PowerEdge FC630 can benefit from the ultrahigh performance and ultralow latency of those devices and also participate as a cache provider in Dell Fluid Cache for SAN infrastructures.

The PowerEdge FC630, like the PowerEdge FC830, takes advantage of other innovations such as PowerEdge Select Network Adapters, Switch Independent Partitioning, fail-safe hypervisors and Dell OpenManage agent-free systems management.

Four half-width PowerEdge FC630 servers in a PowerEdge FX2 chassis

PowerEdge FC430 server: High-density processing

Offering outstanding shared infrastructure density, the PowerEdge FC430 is an excellent choice for data centers where a large number of computational nodes, or virtual machines, are needed to run midtier applications. The quarter-width, two-socket PowerEdge FC430 is designed with the right balance of performance, memory and I/O to deliver the necessary resources to many client applications in a highly efficient fashion.

Accommodating up to eight DIMMs and powered by the Intel Xeon processor E5-2600 v3 product family that supports up to 14 cores, the PowerEdge FC430 provides a tremendous amount of processing resources in an ultrasmall space. By fully loading the PowerEdge FX2 with eight PowerEdge FC430 servers, administrators can scale up to 224 cores and 48 DIMMs.

For caching and storage, each PowerEdge FC430 has one of two configurations: two internal 1.8-inch Serial ATA (SATA) solid-state drives (SSDs) and access to a PCIe Gen 3 expansion slot in the PowerEdge FX2s chassis, or one 1.8-inch SSD with a front-access InfiniBand mezzanine card connection that bypasses the PCI switch. These configurations provide low latency and high throughput for environments like high-performance computing and high-frequency trading. The PowerEdge FC430 supports networking with either a dual-port GbE or a dual-port 10GbE LAN on Motherboard.

Individual PowerEdge FC430 server with two 1.8-inch drives

PowerEdge FM120x4 microserver block: Cost-effective scalability

Built for workloads that prioritize scale-out density and power efficiency over performance, the PowerEdge FM120x4 merges four single-socket PowerEdge FM120 microservers on a single, half-width sled. This design contributes to the microserver block’s impressive density, which is enabled by the innovative system-on-chip (SOC) design of the Intel® Atom™ processor C2000.

A fully loaded, 2U PowerEdge FX2 chassis can host 16 individual microservers, each with two DIMMs and either one 2.5-inch front-access hard disk drive (HDD) or two 1.8-inch SSDs. By using the maximum number of eight-core processors, administrators can add 160 cores and 48 DIMMs to an IT infrastructure with each 2U FX system.

The entry-level PowerEdge FM120x4 is especially well suited to scale-out web services. The Intel Atom processor is engineered to consume very little energy, leading to reduced operating costs, and its SOC design minimizes footprint for added space savings.

Four half-width PowerEdge FM120x4 servers in a PowerEdge FX2 chassis


Kevin Oliver and John Abrams contributed to this article.

Dell TechCenterModular building blocks for a flexible data center

Paul Steeves is a senior marketing manager for the Dell Enterprise Solutions Group

An adaptable IT infrastructure is critical in helping enterprises match specific workload requirements and keep pace with advances in computing technology.

Enterprise computing needs are dynamically changing as business and technology leaders embrace strategic computing innovations to create novel opportunities and gain competitive advantage. The ever-increasing demand for cloud, the exponential expansion of enterprise mobility, the widespread adoption of big data initiatives and the rise of software-defined infrastructures: All these factors drive IT decision makers to evaluate fresh approaches in the data center.

Many IT leaders are looking to adopt the latest application workload paradigms that industry leaders are pioneering. Wherever possible, they want to gain the economic advantages that scale-out technologies have achieved for cloud providers.

Scalable architecture

To address the challenges introduced by the latest computing demands, the Dell PowerEdge FX converged architecture is designed to give enterprises the flexibility to tailor the IT infrastructure to specific workloads — and the ability to scale and adapt that infrastructure as needs change over time.

The FX architecture is based on a modular, building-block concept that makes it easy for enterprises to focus processing resources where needed. This concept is realized through the PowerEdge FX2 chassis, the foundation of the FX architecture. The PowerEdge FX2 is a 2U rack-based, converged computing platform that combines the density and efficiencies of blades with the simplicity and cost advantages of rack-based systems.

The PowerEdge FX2 houses flexible blocks of server, storage and I/O resources while providing outstanding efficiencies through shared power, networking, I/O and management within the chassis itself. Although each server block has some local storage, the FX architecture allows servers to access multiple types of storage, including a centralized storage area network (SAN) and direct attach storage (DAS) in FX storage blocks or in Just a Bunch of Disks (JBODs).

The FX architecture lets data centers easily support an IT-as-a-service approach because it is specifically designed to fit the scale-out model that the approach embraces. At the same time, its inherent flexibility adds value to existing environments. In data centers of all sizes, the FX architecture enables deployments to be rightsized, efficient and cost-effective.

Easy workload optimization

A key design tenet of the FX architecture is ease of workload optimization. The modular blocks of computing resources and the broad range of components available in the FX architecture let data center operators quickly size infrastructure needs to respective workloads.

Server blocks. The rich set of features that are hallmarks of PowerEdge servers makes individual FX server blocks especially flexible in the functionality they can offer. For example, the PowerEdge FM120x4 microserver block addresses the requirements of scale-out computing by optimizing power consumption and footprint. Web services providers can benefit tremendously from the high density, easy manageability and cost-effectiveness that the PowerEdge FM120x4 affords. It is also suited for processing tasks such as batch data analytics.

The PowerEdge FC430 is an excellent option for web serving, virtualization, dedicated hosting and other midrange computing tasks. Its extra-small, quarter-width size is designed to make the PowerEdge FC430 one of the densest solutions in the market. The small, modular form factor of the PowerEdge FC430 enables data centers to host a large number of virtual machines and applications on physically discrete servers, minimizing the impact of potential failures on overall operations. This capability makes the PowerEdge FC430 an outstanding choice for distributed environments that require physical separation for security, regulatory compliance or heightened levels of reliability. It also has an InfiniBand®-capable version that enables low-latency processing.

The PowerEdge FC630 server, with its high-performance processors and large memory capacity, can serve as a strong foundation for corporate data centers and private clouds. It readily handles demanding business applications such as enterprise resource planning (ERP) and customer relationship management (CRM), and it also can host a large virtualization environment.

With support for up to four high-performance processors and exceptionally large memory capacity, the PowerEdge FC830 server is designed to handle very demanding, mission-critical workloads of midsize and large enterprises, whether they are large-scale virtualization deployments, centralized business applications or the database tier of web technology and high-performance computing environments.

Storage block. The PowerEdge FD332 storage block provides dense, highly scalable DAS for most FX infrastructures.² It is a critical component of the FX architecture, enabling future-ready, scale-out infrastructures that bring storage closer to compute for accelerated processing. When used with pass-through mode, it can support software-defined architectures like the VMware® Virtual SAN™, Microsoft® Storage Spaces and Nutanix® platforms.

The PowerEdge FD332 is excellent for consolidation of environments that require high-performance, scale-out storage, such as Apache™ Hadoop® deployments. It also is well suited for dense virtual SAN (vSAN) environments, providing cost-effective, high-capacity hard disk drives (HDDs) that work with solid-state drive (SSD) caches in the server blocks.

I/O blocks. The PowerEdge FN410s, PowerEdge FN410t and PowerEdge FN2210s I/O blocks provide plug-and-play, network-switch layer 2 functions. These powerful I/O aggregators help simplify cable management while also enabling networking features such as optimized east-west (server-to-server) traffic within the chassis, LAN/SAN convergence and streamlined network deployment.

Advanced enterprise management

The FX architecture supports heightened levels of automation, simplicity and consistency across IT-defined configurations. This enables IT administrators to leverage their past experience with Dell OpenManage systems management tools and maintain the field-tested benefits of comprehensive, agent-free management over the entire platform lifecycle: deploy, update, monitor and maintain. Additionally, the FX platform offers administrators a wide range of systems management alternatives and capabilities.

Administrators can elect to manage FX systems like a rack server — locally or remotely — using the Integrated Dell Remote Access Controller 8 (iDRAC8) with Lifecycle Controller. Or they can manage the servers and chassis collectively in a one-to-many fashion using the innovative Chassis Management Controller (CMC), an embedded server management component. These options enable administrators to easily adopt FX servers without changing existing processes.

Each FX server block’s iDRAC8 with Lifecycle Controller provides agent-free management independent of the hypervisor or OS installed. The iDRAC8 can be used to manage and monitor shared infrastructure components such as fans and power supply units. Any alerts are reported by each server block, just as with a traditional rack server.

Alternatively, these same alerts are routed through the CMC when it is used to manage the FX infrastructure. Administrators also can use the CMC’s intuitive web interface to manage the server blocks through the iDRAC8 with Lifecycle Controller or through platform networking.

In addition, the CMC can monitor up to 20 FX systems at a glance, perform one-to-many BIOS and firmware updates, and maintain slot-based server configuration profiles that update BIOS and firmware when a new server is installed. Each of these abilities helps deliver time savings over conventional management and reduce the risk of human-entry errors by automating repetitive tasks.

Finally, OpenManage Essentials and OpenManage Mobile provide remote monitoring and management across FX and PowerEdge servers as well as Dell storage, networking and firewall devices.
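
To give a feel for what agent-free, out-of-band monitoring means in practice, here is a rough Python sketch that polls each server block’s management controller over IPMI using the standard ipmitool utility. The addresses and credentials are hypothetical, IPMI over LAN must be enabled on each iDRAC, and this is a generic illustration rather than the OpenManage tooling itself, which provides all of this and much more out of the box:

    import subprocess

    # Hypothetical iDRAC addresses for the four server blocks in one FX2 chassis.
    IDRAC_HOSTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]

    def read_sensors(host, user, password):
        """Pull the sensor data repository (fans, temperatures, PSUs) from one iDRAC.

        Runs out of band: nothing is installed on the host OS or hypervisor.
        """
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host,
             "-U", user, "-P", password, "sdr"],
            capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        for host in IDRAC_HOSTS:
            print("---", host, "---")
            print(read_sensors(host, "root", "calvin"))  # iDRAC factory-default login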

Foundation for a future-ready, agile infrastructure

By creating PowerEdge FX as a flexible converged architecture that can grow and advance with the latest technologies, Dell enables enterprises to deploy IT infrastructure that easily adapts to the ever-shifting business and technology landscape. The foundations IT decision makers invest in today are designed to support the changes they implement tomorrow, giving enterprises the agility to remain competitive in a fast-moving marketplace.

Kevin Oliver and John Abrams contributed to this article.

Learn More

Dell PowerEdge FX 

Dell OpenManage 

1 Dell expects to release the PowerEdge FX2 and the PowerEdge FC630, PowerEdge FM120x4 and FN I/O aggregators in December 2014 and the PowerEdge FC430, PowerEdge FC830 and PowerEdge FD332 in the first half of 2015. 

2 The PowerEdge FD332 block does not support the PowerEdge FM120 microserver. 

Dell TechCenterJoin Us at Strata + Hadoop World in San Jose

Join us at Strata + Hadoop World for compelling presentations and demonstrations from customers and leading Big Data technologists. (read more)

Dell TechCenterMigrator for Notes to SharePoint Video Series

We recently created a new series of short training videos for Migrator for Notes to SharePoint. The videos will help users with their Notes application assessment projects. Many more videos are forthcoming, but for now, we hope you find the videos helpful... (read more)

Dell TechCenterAll the Data, All the Time

At Dell’s recent Big Data 1-5-10 event, I kicked off my introduction by saying my goal is “to help customers use 100 percent of their available data all the time.” This remark caused a few heads to turn, and later prompted Jeff Frick, GM of SiliconANGLE and host of theCUBE live interview show, to ask me for more insight into what he called a “provocative statement.”

Shouldn’t we all be driving toward collecting, analyzing and utilizing data to its fullest? As I explained to Jeff, we’re nowhere near ready to deliver all the data, all the time, but we need to make steps in that direction so we’ll be ready to clear the hurdles and take full advantage of opportunities as they become available.

Technology is still siloed, unfortunately, which makes it difficult for people to build out all the analytical models that can deliver answers to their most critical questions today. Structured and unstructured information isn’t analyzed together, which creates another barrier to getting a single view of the truth. Yet another barrier: the people doing the analytics address very specific, often narrow areas of focus.

Currently, most companies use only a subset of their data for a very specific purpose. But you can discover so much more if you step back and take a larger view. For example, instead of only looking at revenue trends over the past 12 months, what could be learned if you looked more broadly at the health of your company’s customer base, or at the social factors driving the trends and behaviors that accelerate or moderate movement in your business?

Delving deeper into the data delivers so much more insight. At the University of Iowa Hospitals and Clinics, for instance, Dell Statistica is used to pull data from a wide variety of data sources to help lower the rate of infection for surgical patients. As reported in the Wall Street Journal’s CIO Journal, the University of Iowa takes information from patients’ medical records and surgery specifics, such as patient vital signs during operations, to predict which patients face the biggest risk of infection.

Armed with this valuable insight, doctors can create a plan to reduce the risk by altering medications or using different wound treatments.
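
The University of Iowa’s production models are of course far richer than anything shown here, but the underlying pattern (train a classifier on historical surgical outcomes, then score new patients) can be sketched in a few lines of Python. The feature set below is purely illustrative, not theirs, and assumes scikit-learn is available:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative features only: age, BMI, surgery duration (min), mean heart rate.
    # A real clinical model would draw on far more of the medical record.
    X_history = np.array([[64, 31.0, 180, 95],
                          [41, 24.5,  90, 72],
                          [58, 29.8, 240, 88],
                          [35, 22.1,  60, 70]])
    y_history = np.array([1, 0, 1, 0])  # 1 = surgical-site infection occurred

    model = LogisticRegression().fit(X_history, y_history)

    new_patient = np.array([[70, 33.2, 200, 101]])
    risk = model.predict_proba(new_patient)[0, 1]
    print("Estimated infection risk: {:.0%}".format(risk))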

Thanks to the evolution of analytics, other organizations will be able to follow University of Iowa’s lead in more fully utilizing their data. We’re at a tipping point—compute cycles now are affordable enough and can keep pace with data proliferation while plentiful bandwidth and cloud services make ubiquitous data access a reality. Today’s infrastructures enable us to do things that weren’t possible five years ago.

While environments now are ready to accommodate a more holistic view and broader conversations about data, most companies are just starting to buy in conceptually. Sure, companies want access to all their data, all the time, but most folks I speak with see this as an aspirational goal still to be achieved. When it comes to the here and now, they’re pretty pragmatic and taking the first steps to realizing their data’s full potential.

Since focus is the hallmark of success, I recommend putting customers first. Start by taking all the steps you can to get all the data on your customers. Then, gather all the data on your product areas, supply chain, manufacturing, etc. In each respective area, there likely will be a dozen different data sources that are interconnected and interrelated. For instance, in compiling data on customers, you’re likely to encounter exposed interfaces that take you to product, which can be integrated with manufacturing, and so on. It’s kinda like assembling LEGO blocks or deciphering fractal patterns as all the data elements are nested and interwoven.

Another major step is determining how best to empower your data analysts by providing them with the right tools for producing everything from simple reports and visualizations to complex analytics. But don’t stop there. If your data is locked away and only useful for PhD modelers and data scientists, you’ll only solve part of your problems. Getting data into the hands of your subject matter experts and line-of-business decision makers is crucial because they too must be empowered to build their own analytical models.

The day when employees become their own data analysts isn’t too far out on the horizon. Once everyone has access to all the data, all the time, they can create their own hypotheses. Training your employees to think more analytically is something every organization should already be doing to stay ahead of the curve.

What steps are you taking to ensure your company gets the most from all its data, all the time? Drop me a line at john.k.thompson@software.dell.com to exchange ideas on how to unlock the power of your data.

Dell TechCenterWatch Out for Bumps in the Road: The Business Risks of Data Protection

Most traditional data protection strategies are full of unforeseen bumps in the road because they were built for a simpler time – when one server, one tape seemed like the leading edge. But that time is past; it’s in the rearview mirror. You’re dealing with a much more complicated set of scenarios, and you need data protection that overcomes the hidden risks of old approaches. In our latest eBook, we’ve developed a roadmap to help you avoid some of these risks.

Spring is around the corner, and for my family, that means a spring break road trip is in order. We’ll be off to some warmer corner of the country, spending a week visiting the grandparents, getting in some relaxation, and returning home refreshed, ready for school and work and the status quo.

That is, unless we hit another unforeseen bump in the road.

Last year, we were driving through a construction zone at night, and without any real warning, we drove over a damaged section of road. The impact damaged our car. It shook us up. And it threatened to turn our enjoyable trip into a disaster.

Something similar can happen with data protection. One moment, you're driving along, things are smooth, and the next minute, you have problems.

Let’s take a look at some of the bumps you might face when protecting your data.

Risk #1: For physical and virtual, inconsistent processes = inconsistent results

Let’s imagine a simple scenario: you have five physical, legacy workloads and five virtual workloads. You’re using a traditional backup and recovery technique to back up your physical servers, while using a virtualization-specific solution to back up your virtual machines.

That’s all well and good if the added complexity of two data protection solutions suits your needs. But it usually doesn’t because with two solutions, you have inconsistent processes and inconsistent results. Inconsistency is bad for business and tends to frustrate everyone – including your executives.

Risk #2: Lengthy time to recovery = business exposure

How quickly do you need to recover? Well, that’s a complicated question of course, but the standard answer is “as quickly as possible.” The problem is that traditional data protection approaches don’t lend themselves to a short recovery time objective (RTO). Magnetic tape is slow compared with disk. Granular restore isn’t usually an option, so you’re reloading an entire database or file structure from tape just to recover a record or file. In virtual environments, traditional solutions usually force you to read the entire virtual machine back from tape.

These limitations expose your organization to risks because you're compromising important workloads. How will you cope when you can’t recover for days?

Risk #3: Poor granularity = increased time, cost, and risk

Traditional backup approaches tend to have poor backup and recovery granularity. You need a solution that backs up just the changes and is capable of restoring just the changes – instead of an entire email store or file server. Having to restore an entire mail server just to access a single lost mail message is overkill—but it’s something many organizations accept as inevitable because that’s been the status quo for a long time.

The status quo isn’t a good thing for you. You need a data protection approach that’s granular – one that can restore a single byte, or file, or record because it protects data intelligently, in a granular way. Without granularity, you increase the time spent on data protection, the risks to your organization, and the costs associated with management, storage, and poor recovery.
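
To make “back up just the changes” concrete, here is a toy Python sketch of block-level change detection, the idea underneath most modern incremental and granular approaches. It is an illustration only; commercial products layer cataloging, deduplication and retention policies on top of logic like this:

    import hashlib

    BLOCK_SIZE = 4096  # bytes

    def changed_blocks(path, previous_hashes):
        """Return only the blocks whose content differs from the last backup.

        previous_hashes maps block index -> SHA-256 hex digest from the prior run.
        """
        changes = {}
        with open(path, "rb") as f:
            index = 0
            while block := f.read(BLOCK_SIZE):
                digest = hashlib.sha256(block).hexdigest()
                if previous_hashes.get(index) != digest:
                    changes[index] = block  # ship just this block, not the whole file
                index += 1
        return changes

Restore works the same way in reverse: to bring back a single lost record, you read only the handful of blocks that contain it rather than an entire mail store or file server.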

Risk #4: Too many administrators doing too much work

It’s a sad fact: traditional data protection approaches are incredibly labor-intensive, especially as your organization grows. Your admins are monitoring backups to ensure they complete, shuffling tapes off-site and back on-site, and dealing with the implications of slow recovery times.

Because of this, you’re devoting far too much energy and money to keeping the status quo alive. Projects that could be beneficial to the business end up unrealized because there aren’t sufficient IT resources. Problems take longer to solve because resources are tied up with daily overhead. Your business suffers.

Risk #5: Lack of recovery flexibility

Let me ask you a question: how flexible is your traditional data protection? Do you have the capability to recover a failed system to a standby server? Recover physical servers to a temporary virtual machine? Recover to a virtual machine in the cloud, or in another datacenter entirely?

Traditional backup solutions don’t facilitate that flexibility. How are you supposed to get your tapes into the cloud? You’ve spent time and money building a flexible, virtual datacenter, and yet your backup barely leverages Internet connectivity.

Risk #6: Massive amounts of at-risk data

Your data is growing exponentially, which likely means that your backup windows are limiting your ability to keep everything backed up. Some data doesn’t get backed up for days or weeks.

How much data can your organization stand to lose? The fact is, most organizations accept massive amounts of at-risk data because that was the best they could do. Here’s a risk that goes beyond a bump in the road – to extend the analogy, the bridge is out and you’re about to drive into the swamp.

How can you leverage leading technologies to avoid these bumps in the road?

The old ways are bumps in the road that could ruin your trip. How can you avoid them?

We’ve produced a roadmap to help. It’s called Think Like a CIO: 5 Key Virtual + Physical Data Protection Takeaways. It explores all the challenges, opportunities, and benefits you could experience by moving away from a traditional data protection approach.

Based on years of experience, both internally and with thousands of customers, it offers our perspective on the best approach to data protection. Download it here to learn the best ways to avoid bumps in the road while traveling toward optimized data protection that’s built for you.

 

Dell TechCenterEmpower Your Business – The ‘Triple A’ Security Approach

Triple-A ratings are normally associated with chief financial officers keeping tabs on John Moody’s bond credit ratings. In the world of IT, however, how can a chief information officer or information technology decision maker (ITDM) rate the efficiency of an IT security implementation?

IT security is one of the main concerns for ITDMs, with attacks such as Shellshock and Heartbleed affecting organisations globally. ITDMs are therefore taking steps to protect the corporate network from threats of all sizes. However, as it stands, security is still at risk from both internal and external standpoints.

How can ITDMs know when they have reached a level of security that will protect from cyber-attacks while still empowering employees to do their jobs better? A comprehensive security approach should encompass three factors: it should be adaptive to threats, business requirements and the ever-evolving use of the internet within the corporate network; it should be adapted to meet the specific requirements of an organisation; and it should be fully adopted by end users.

These factors can be summarised as a ‘Triple A’ security approach, one that could help improve your overall security posture and grant your organisation a ‘Triple A’ security rating.

Adaptive:

IT infrastructures are constantly changing. In the past we had static IT infrastructures; now we are moving towards a world of convergence. Security infrastructures therefore need to adapt in order to be effective. An adaptive security architecture should be preventative, detective, retrospective and predictive. In addition, a rounded security approach should be context aware.

Gartner has outlined the top six trends driving the need for adaptive, context aware security infrastructures: mobilization, externalization and collaboration, virtualization, cloud computing, consumerization and the industrialization of hackers.

The premise of the argument for adaptive, context aware security is that all security decisions should be based on information from multiple sources.

Adapted:

No two organisations are the same, so why should security implementations be? Security solutions need flexibility to meet the specific business requirements of an organisation. Yet despite spending more than ever to protect our systems and comply with internal and regulatory requirements, something is always falling through the cracks. There are dozens of “best-of-breed” solutions addressing narrow aspects of security. Each requires its own specialist to manage, and gaping holes are left between them. Patchwork solutions that combine products from multiple vendors inevitably lead to the blame game.

There are monolithic security frameworks that attempt to address every aspect of security in one single solution, but they are inflexible and extremely expensive to administer, and organisations often find that they become too costly to run. They are also completely divorced from the business objectives of the organisations they’re designed to support.

Instead, organisations should approach security based on simplicity, efficiency and connectivity, as these principles tie together the splintered aspects of IT security into one integrated solution capable of sharing insights across the organisation.

This type of security solution ensures that the security approach has adapted to meet the specific requirements and business objectives of an organisation, rather than taking a one size fits all approach.

Adopted:

Another essential aspect of any security approach is ensuring that employees understand and adopt security policies. IT and security infrastructure are there to support business growth; a great example of this is how IT enables employees to be mobile, thereby increasing productivity. At the same time, however, it is vital that employees adhere to security policies and access data and business applications in the correct manner, or else mobility and other policies designed to support business growth in fact become a security risk and could actually damage the business.

All too often people think security tools hamper employee productivity and impact business processes. In the real world, if users don't like the way a system works and they perceive it as getting in the way of productivity, they will not use it and hence the business value of having the system is gone, not to mention the security protection.

Providing employees with training and guides around cyber security should lead to policies being fully adopted, and the IT department should notice a drop in the number of security risks arising from employee activity.

Triple A

If your overall security policy is able to tick all three A’s, then you have a very high level of security. However, these checks are not something you can do just once. To protect against threats, it is advisable to run through this quick checklist on a regular basis to ensure that a maximum security level is achieved and maintained at all times. It is also important to ensure that any security solutions implemented allow your organisation to grow on demand; as Dell says: Better Security, Better Business.

La Perla had the challenge of managing expansive growth, demand for remote access and a minimal learning curve in its organisation. It turned to a trio of Dell Security solutions: Dell SonicWALL NSA Series next-gen firewalls, the Dell SonicWALL Secure Remote Access (SRA) Series and Dell SonicWALL Global Management System (GMS).

“Secure communications and a secure business infrastructure are a priority for our Group and we found that Dell SonicWALL products meet our requirements perfectly,” said Mauro Ruscelli, network security expert at La Perla.

Dell TechCenterNew Release – #Dell Migration Manager for Enterprise Social Beta

We’re looking for customers and partners to help test the beta version of our new product — Migration Manager for Enterprise Social. This solution enables organizations to consolidate Jive content to Yammer and: Eliminate the cost...(read more)

Dell TechCenterSharePlex Tips and Tricks webcast: Optimizing Performance of Data Replication Queues

Join us for an informative SharePlex tips and tricks webcast on setting up and optimizing replication queues. Register today here. Date: Feb. 18, 2015. Time: 10:00–11:00 AM PST. Duration: 60 minutes. Event: Online. (read more)

Dell TechCenterSoftware-Defined Data Centers for Enterprise and Carriers: Common traits and DNA for the future

In my last blog, I wrote about Open Networking and how it forms the foundation for our Software-Defined Networking (SDN) portfolio at Dell. Open Networking and SDN are playing a significant role in today’s enterprise data centers as customers migrate to Software-Defined Data Center (SDDC) architectures. SDDC employs virtualization technologies to abstract and converge compute, storage and networking resources, delivering automated cloud/XaaS functions and services, as shown in figure 1.

Enterprise CIOs and IT managers are making the move to SDDC environments to improve agility and business responsiveness, allowing different application stacks and workloads to be loaded on top as business needs dictate.

Figure 1: Enterprise SDDC stack

However, the adoption of SDDC principles is not limited to enterprise environments. Carriers are also adopting SDDC technologies and architectures as they look to Network Functions Virtualization (NFV) solutions for their provisioned infrastructure and services. In the case of carriers and NFV, the scale of the deployment will vary significantly from small, unstaffed, point-of-presence applications to super-sized hyperscale applications. Also, in the case of carrier NFV deployments there is a preference for OpenStack and open source technologies. Yet despite differences in scale and software, as figure 2 shows, an enterprise SDDC stack and a carrier NFV SDDC stack look remarkably similar.

Figure 2: Carrier SDDC/NFV stack

Interestingly, as the two figures show, both enterprises and carriers are adopting virtualized x86 infrastructure at the core of their software-defined architectures. And while the goals may differ (enterprises are adopting SDDC to improve business responsiveness, while carriers are adopting it to improve service agility), there is a significant amount of learning and best practices that can be shared. This is especially true for large carriers and other organizations that own and operate both enterprise infrastructure, typically governed by the CIO, and provisioned service infrastructure, governed by the CTO. The more that CIO and CTO organizations see the similarities in their technology underpinnings, the more they can collaborate, realizing maximum efficiencies in both technology and people; in fact, enterprise SDDC and carrier SDDC deployments share common open, server-centric traits and DNA.

Figure 3: Common traits and DNA for enterprise and carrier SDDC deployments

Interested in learning more about our NFV initiative? Visit dell.com/nfv. To stay updated, follow us @DellNetworking on Twitter.

Dell TechCenterOpen technologies and collaboration = doing cloud right

Editor’s Note: a significant portion of this blog was originally published in Norwegian by Dell’s Espen Schanke. Click here to view original post.  

How often do we experience a true alignment of people’s actions with their words?  I bet most of us would say probably not often enough. The IT media is all abuzz with marketing noise about how cloud technology will transform IT, but they don’t always have concrete examples to point to. Check out this interesting project run by some pretty smart people who understand that by marrying open cloud technology with collaboration across multiple organizations they can truly transform IT service delivery. 

Developing future-ready IT platforms for higher education in Norway

UNINETT operates networks and provides Internet, advanced computing, and technology services for universities, colleges and research institutions in Norway. An important part of UNINETT’s mandate is to examine new technology and explore what it can provide to the education sector. With the UH-sky project, UNINETT has established a partnership to coordinate a common approach to cloud infrastructure. The intention is not to build a separate IT environment, but to act as an advisor for the architecture, layout, brokerage and consumption of cloud services in the education sector in Norway.

UNINETT Network Map showing peak traffic

 

Infrastructure as a Service

A critical aspect of the UH-sky project is collaboration involving the largest universities in Norway (the University of Oslo, Bergen, Tromsø and NTNU), which are developing a new model in which IT infrastructure can be delivered as a service across organizations. This cloud model will be tested as a foundation for future delivery of IT services in the education sector. UH-sky project documentation is available for viewing here. The project is contributing code to open source via GitHub. As Technical Project Manager Jan Ivar Beddari explains:

The key drivers for the UH-sky project are similar to those highlighted by Gartner for why organizations should offer cloud services. We believe that by drawing on the expertise across multiple Norwegian universities and colleges to develop this new cloud model we can ensure better service quality, become more efficient and accelerate the speed with which ICT services can be delivered. By having ready-made cloud solutions and models it becomes much easier for users to get started with new services.

Software Defined services and networks

Dell was selected as one of the main providers and advisors for this project. The UH-sky platform is being built with Dell PowerEdge 13G servers (R630 and R730xd), Dell switches with Cumulus Linux software, and the Red Hat Enterprise Linux OpenStack Platform. Software-defined services and networks are a prerequisite, and the project made an active choice to use open source technologies. Dell and Red Hat’s approach to and support for software-defined services and open models was a key decision factor. This notion is supported by Beddari:

Dell and Red Hat are well established suppliers to all the major universities in Norway. We know their expertise and compute and storage solutions well, and are very satisfied having worked with them. As we move forward into the future we are very interested to be working with Dell and Red Hat to explore new software-defined and cloud models and believe Dell and Red Hat have very competent advisors to help us with the UH-sky project.

UNINETT and its UH-sky project provide an excellent example of how Dell actively collaborates with our customers, and this project is directly aligned with Dell’s leadership and support for open standards-based platforms and technologies. Dell will continue to advance open cloud and software-defined technologies with customers like UNINETT and partners like Red Hat, because fundamentally open standards-based solutions are the key for transforming IT systems and operations.

The evolution of Dell Red Hat cloud solutions just took another significant step forward with Red Hat’s release of Red Hat Enterprise Linux OpenStack Platform 6. This release brings improved support for OpenStack Neutron networking, support for multiple LDAP backends and interoperability with Red Hat Inktank Ceph storage – a technology Dell and many of our OpenStack cloud customers have strongly embraced. Our Dell engineering team is finalizing our validation work with Red Hat Enterprise Linux OpenStack Platform 6 now – watch for more news on this front soon.

Click here to get additional details on Dell and Red Hat Cloud solutions.
