Mark Cathcart: Wonky property taxes

The new Austin 10/1 council is pretty much settled after the run-offs. The new districts will be represented by nine new Council members, two of whom are Realtors, and a new Mayor with zero Council experience, all voted in on appallingly low voter turnout, especially in the runoff elections. In District 3, Sabino “Pio” Renteria won with just 2,555 votes, a victory margin of 833 votes… I assume a mere $10,000 could have bought victory for his opponent by paying locals $10 apiece to go vote. I bet that TV and newspaper advertising looks lame now.

One of the flagship, priority subjects will no doubt be property tax. Most candidates had a position on it, and almost all want to discount or cut it. Hold on, not so fast. Anyone who actually thinks it through knows property tax has almost nothing to do with affordability today. I can certainly pay my taxes now; the question is, will I still be able to pay them in 20 years?
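The 20-year worry is just compound growth at work. Here is a small Python sketch of the arithmetic; the appraisal, tax rate, and growth figures are assumptions for illustration, not actual Travis County numbers:

```python
# Hypothetical illustration: how a home's property tax bill can outgrow
# a fixed retirement income over 20 years. All numbers are assumptions.
def tax_bill(appraised_value, rate=0.022):
    """Annual property tax at an assumed 2.2% combined rate."""
    return appraised_value * rate

def project(value, growth=0.08, years=20):
    """Project the annual tax bill with the appraised value growing
    at an assumed 8% per year."""
    bills = []
    for _ in range(years):
        bills.append(tax_bill(value))
        value *= 1 + growth
    return bills

bills = project(200_000)  # a modest home, assumed $200k appraisal today
print(f"year 1: ${bills[0]:,.0f}, year 20: ${bills[-1]:,.0f}")
```

Even at these modest assumed rates, the year-20 bill is more than four times the year-1 bill, while a retiree's income typically stays flat.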

That’s certainly the problem most long-term residents of the core/downtown neighborhoods face now. They generally live in modest homes whose lot price, and hence property tax valuation, has gone through the roof. The guy who lived opposite me, 73 when he died in April, couldn’t afford to retire. He was an (arthritic) plumber. Even with an age discount, he was paying more per year in property tax than he paid on the mortgage when he bought the property, with a small deposit from his mother’s estate after she died.

That effect can’t be allowed to continue, for many reasons, not least because it is simply not acceptable that people who have lived in their home, often a modest property in a once less attractive neighborhood, are now, in their later years, being forced to move. That is a time when you are often less prepared for change, less able to make new friends, and less willing to relearn where to go and how to get to pretty much everything.

Yes, they can make a ton from selling, and yes, many may choose to do that to move into assisted living, but they shouldn’t be forced to because they cannot afford their property taxes.

The whole issue has become conflated with the usual Tea Party “any tax is too much” rhetoric. Rather than everyone jumping on the bandwagon demanding a property tax cut, and the new 10/1 council lauding it as having done something important, we should ask whether the current proposal actually achieves anything, which it clearly doesn’t.

Julio Gonzalez has two great posts on his Keep Austin Wonky blog that show what the current proposal means: The Homestead Exemption debate in 2 minutes and 10 bucks or 10,000 homes.

Dell TechCenter: Making Life Easier for IT Departments with Enhanced Systems Management Tools

Technology in the workplace is moving at an incredible pace. In fact, our latest Global Evolving Workforce Study and International Tablet Survey show some of the dramatic ways workplaces are changing in light of these new technologies.

And while these technologies have opened new doors for employees to be productive in a range of new locations and situations, they also mean that IT departments are now busier than ever. This is why we’re continually innovating our Dell Client Command Suite to provide the most manageable solutions in the industry.


Not only do IT professionals have to consider how to incorporate an ever-expanding range of devices – and BYOD programs – into their existing infrastructures, but they also have to do so in a way that secures critical business data and allows employees to remain productive.

"They have their own personal apps and their own setup as to what the user interface is and that sort of thing. You don't want to mess with that, but you also want to make sure that the device is secure when it's on your network and utilizing the various applications that you have within your system," Charles Podesta, CIO at UC Irvine Health, recently told HealthLeaders magazine.

That’s not even to mention the need for organizations to maximize existing hardware and software investments, which can often increase in cost over their lifespan due to a variety of factors, including keeping PCs patched and up to date, plus the cost of diagnosing and repairing failures.

We want to help IT leaders like Podesta simplify their jobs and maximize their organization’s investment in technology from day one. With the Dell Client Command Suite, organizations can download the tools they need to help with whichever management task is at hand.

And while manageability solutions can often be hard to put a finger on directly, the video below shows how client command tools can fit into organizations of all sizes:


We’ve worked hard to make these solutions as simple and encompassing as possible – so they can manage tasks from configuring 200 new systems for recent hires, to delivering driver patches for 10,000 systems across a national network. These management tasks can often become complex and time consuming, and that’s why we have invested in tools to enable IT admins to not only manage their fleets more simply, but to ultimately be more productive.

Some of the tangible benefits of our Client Command Suite solutions include:

  • Faster Deployments: Organizations can enjoy 77 percent fewer steps (vs. HP/Lenovo) through simple, easy-to-use, free integration with Microsoft System Center 2012.
  • The most generations of updates: IT departments will enjoy full update support over the depreciation lifetime of their fleet. Dell Driver CABs are even supported for five generations of Dell Latitude – the longest period of any hardware provider.
  • Exclusive Intel vPro capabilities: Dell can remotely erase the hard drives of 100 systems in 10 seconds, while Lenovo can take almost 3 hours – thanks to our unique vPro capabilities.*
  • The only client PowerShell provider: Dell is the first OEM to offer a dynamic PowerShell provider for client hardware management, making BIOS management easier and significantly reducing the time needed to create scripts.

With Dell Command | PowerShell Provider, we’re bringing even more simplicity to managing Dell client systems. A few features include modifying SMBIOS settings such as passwords, the TPM token, and service and asset tag information. Dell’s PowerShell Provider is just one more addition to our class-leading suite of manageability solutions, helping organizations enable workforce productivity in a way that doesn’t swamp their IT departments.

Dell is committed to offering organizations the software solutions to maximize every aspect of their technology infrastructures. Our manageability tools ensure organizations enjoy truly end-to-end solutions, and we believe this is a key differentiator for Dell.

What do you feel is the most important thing we could do to help you better manage Dell client systems and enable more productivity?

*Based on June 2014 Principled Technologies report commissioned by Dell, “Remote Notebook Management: Dell Extensions Supporting Intel vPro Technology and Dell Integration Pack 3.1”, testing the Dell Latitude 7000 series against legacy Lenovo ThinkPad, where Dell extrapolated results of testing against Lenovo to 100 systems. Actual results will vary. 

Dell TechCenter: Register now for the upcoming SAP webinar

In the age of the Internet, mobility, big data and in-memory analytics, it’s no longer a secret that linking production and IT into an efficient production system has become an increasingly important success factor for manufacturing companies.

But how can companies make the leap to this new form of value network? And are there already practical applications, and experiences from other manufacturing companies, that can help in developing your own strategy on the way to becoming a “Smart Factory”?

Dell and SAP would like to answer these questions during a free one hour webinar. Don’t miss this exciting opportunity to interact with experts. Register today!

Webinar Agenda

  • Overview of Dell Shop Floor Services for SAP Solutions – Speaker: Dell
  • Case Studies – Speaker: Dell
    • Paperless manufacturing processes at an electrical components producer (SAP ME)
    • Shop-floor transparency through cross-location integration, with operator, supervisor and overhead shop-floor monitors, at a heavy machinery producer (SAP MII)
    • Process integration down to the machines at a composite materials producer (SAP PCo)
  • Emerging trends – Speaker: SAP
    • SAP Connected Manufacturing – Update on SAP solutions for the Internet of Things
  • Summary with Q&A session

Dell TechCenter: Kevin Shinpaugh of Virginia Tech Shares his Thoughts on Why HPC Matters

Virginia Tech's Kevin Shinpaugh shares his thoughts on why HPC matters and other issues facing the industry while at SC14.

Dell TechCenter: S&P Upgrade

On Dec. 4, 2014, Standard & Poor's Ratings Services (S&P) upgraded our Corporate and Debt credit ratings:

  • Corporate Rating revised from BB- to BB+ (a two-notch upgrade and one notch below investment grade)
  • Senior Secured debt revised from BB+ to *** (also a two-notch upgrade)
  • Senior Unsecured debt revised from B+ to BB+ (a three-notch upgrade)

We are pleased with this external acknowledgement that we are making progress in our transformation to become the leading provider of end-to-end IT solutions, especially given we just passed the one-year anniversary of our go-private transaction.

In S&P’s press release, its rationale for the upgrade included:

  • Expectation that Dell will continue to reduce debt, targeting and maintaining adjusted leverage below 3x.
  • Dell's "fair" business risk profile, which incorporates the company's strong brand name and good market position across its hardware product portfolio, a geographically diverse and broad customer base, highly competitive market conditions, ongoing cost reductions enabling consistent profitability, and a modest mix of services and software revenues with higher growth and margin contribution potential.
  • Prudent balance sheet management.

For more information on the S&P upgrade as well as additional agency reports, please visit our Investor Relations site.

Dell TechCenter: NVMe and You

Author: Tien Shiah, Samsung Product Marketing Manager – SSD

Tien brings more than 15 years of product marketing experience from the semiconductor and storage industries.  He has an MBA from McGill University and an Electrical Engineering degree from the University of British Columbia.

In the article, “The Evolution of Solid State Drives for the Enterprise,” we presented a brief overview of the interface choices available for Solid State Drives (SSDs). Of the three standard choices available, SATA, SAS, and PCIe, it was the PCIe interface that clearly supported the fastest throughput and highest performance numbers. Given the data-intensive requirements and increased performance needs of applications today, it appears certain that more and more organizations will be turning to PCIe SSDs to meet their enterprise storage needs.

The Dawn of NVMe

Within the category of PCI Express (PCIe)-based SSDs, there have been several advancements in just the last year that have had a significant impact on capacity and performance levels. While it is widely understood that PCIe SSDs utilize non-volatile NAND memory (i.e., the data held by the drive is persistent and will not be lost if power to the drive is abruptly terminated), not all PCIe SSDs used a standard set of drivers or features – the result being inconsistent drive performance across manufacturers, incompatibility with some systems, and so on.

Dell and Samsung Semiconductor, along with a consortium of over 90 other companies, sought to remove the differences in available non-volatile memory (NVM) drives.  The result was a standard specification called NVM Express, or NVMe. Samsung was the first to introduce an NVMe drive to the market, providing a standardized, high-performance solution that was previously only available via expensive, custom solutions. As outlined in detail here, NVMe standard drives exploit the full potential of non-volatile memory while incorporating a feature set required by enterprise and client systems. They also extend the evolving trend of SSDs to reduce I/O latency while driving up overall performance. In fact, upon launch, NVMe drives boasted a 50%+ reduction in latency when compared to the already solid performance of SCSI/SAS SSDs. 

NVMe in Your Environment

Generally speaking, PCIe SSDs are ideal in environments where cache performance is critical. Data that is in high demand is held in cache, reducing the time needed to access that data, lowering latency, and improving application performance. Similarly, these flash drives accelerate log file writes, also resulting in increased application acceleration.

As such, customers that have broad OLTP or OLAP requirements will realize immediate performance improvements in their environment with PCIe SSDs from Samsung. Response times are reduced, transactions per second increase, and the maximum number of concurrent users goes up. This is particularly apparent in environments where there is a need to reduce the differences in performance between storage and the central processor, a need to add high-speed caching to an existing HDD tier, or an overall need to accelerate the performance of mission critical applications while maintaining the highest levels of data integrity.

When you add in the benefits associated with the NVMe standard, you can achieve performance gains across multiple cores to access critical data, enjoy scalability with headroom for current (and future) non-volatile memory performance, and leverage end-to-end data protection capabilities. As you consider your choice of SSD interfaces in your environment, you can refer to a growing list of benefits that can be achieved by deploying NVMe PCIe SSD. Dell PowerEdge Express Flash NVMe PCIe SSDs, powered by Samsung NVMe SSD technology:

• Have twice the performance of previous generational PCIe SSD devices
• Are front-access, hot-plug 2.5-inch PCIe SSD devices that support the ability to “hot-add” additional devices without the need to insert cards into PCIe slots that require taking a server offline
• Utilize device-based flash management, reducing overhead costs
• Support a wide range of applications including OLTP, OLAP, collaborative environments, and virtualization (where there is random access to data versus sequential access for reads and writes)
• Provide lower latency of data

To learn more about Dell PowerEdge Express Flash NVMe PCIe SSDs, please visit:

To learn more about Samsung SSDs, please visit:

Dell TechCenter: Making the Best SSD Purchasing Decision

Author: Tien Shiah, Samsung Product Marketing Manager – SSD

Tien brings more than 15 years of product marketing experience from the semiconductor and storage industries.  He has an MBA from McGill University and an Electrical Engineering degree from the University of British Columbia.

Effective Upgrades to Boost Performance

Let’s face it, every computer user complains about performance. It’s an unfortunate fact of life, but over time, computers become slower. New applications, filling hard drives with data, and changes to operating systems inevitably lead to slowdowns. That causes user frustration and reduces useful system life expectancy.

So what’s the best way to boost client performance? Basically, it’s making the right component choices up front when purchasing a system. Making good choices at time of purchase boosts user satisfaction and useful system life. But what are the best choices?

When buying a desktop or notebook, there are three main component choices that affect performance: the processor, the RAM, and the storage. Let’s explore each in turn.

Processor -- This upgrade is most useful for those doing processor-intensive tasks that make you wait—like image manipulation, video or audio encoding, CAD/CAM, or scientific computing. Today’s multi-core processors streamline multitasking, especially when these intensive processes are involved. Faster processors can also help boost gaming. But for general business productivity, email, web surfing, a processor upgrade won’t help very much. Modern processors have no problems running current operating systems or applications without being bogged down.

RAM -- While RAM is an easy upgrade you can make, PC systems are limited in how much memory they can support. Beyond a certain amount used by most applications, having extra RAM will not be as cost-effective as upgrading your storage to get improved performance.

Storage -- Upgrading to a solid state drive (SSD) is one of the best choices you can make in terms of general speed boosts. An SSD can speed up your boot time and the launching of applications, as well as boosting shutdown speeds.

Upgrading your storage offers application-specific benefits as well. For example, with the widespread use of high-definition video capture devices (such as those found on smartphones), more people are editing video on their computers than ever before. The speed advantage SSDs have over HDDs is critical here. Users experience better responsiveness, with changes taking effect more quickly. SSDs minimize the time video users spend waiting and maximize the time they spend creating.

Video games also benefit tremendously from SSDs. Unlike word processors or spreadsheets, which need to only load themselves and a document, when video games are launched they need to load the engines that power them and the graphics to display to the user. Depending on the complexity of the game, this can take anywhere from a handful of seconds to a minute or more. New levels or missions then need to be loaded as play progresses – and the longer the delay experienced by the player, the less satisfying their gameplay experience is.

Another type of application that benefits from an SSD is anything that relies on a database to function, and one database-backed application most users encounter is a local mail client such as Microsoft Outlook. For users who never delete their emails and have tens of thousands of messages stored locally, an SSD will not only cause Outlook to load faster, but also make all the messages it contains available to view and act on more quickly.

To conclude, choosing an SSD such as one made by Samsung Semiconductor, the most widely recognized SSD manufacturer in the world, is likely to provide the best opportunity to enhance system performance.  Popular SSD capacities offered for notebooks include 128GB, 256GB, and 512GB. 

For more information about Samsung SSDs, please visit:

For more information about Dell notebooks with SSDs, please visit:

Dell TechCenter: vWorkspace 8.5 Performance Benchmark

The inception of Wyse vWorkspace began in the early 2000s with the development of management and optimization tools such as the Universal Printer and Automated Task Management. These tools, and others, were used to optimize and streamline Server Based Computing deployments and were very successful adjuncts to the base offering. In 2006 we released our first VDI connection broker. The stand-alone tools were packaged with the broker to create a single terminal server/virtual desktop brokering and management product. Since then it has blossomed into a highly scalable and competent solution. The product’s competency and scalability have been evidenced over the years by the results of performance testing: benchmark tests that measure the performance of the different vWorkspace components, such as the Connection Broker and the Secure Access Service.

The Dell Wyse vWorkspace Connection Broker does a lot. On an initial connection request, the broker takes many steps to provide a configured desktop environment to the end user. Evaluating the connection and authenticating user credentials are only the first step in the process. The vWorkspace Broker then determines which target types apply to the session request and which resources should be applied to the virtual desktop session. A target can be the endpoint’s IP address or device name; it can be the user’s account name, group membership or OU membership, or any Boolean-based mix of these target types. Once target assignment is complete, resource evaluation and resource assignment are executed. Resources can be any combination of the list below and are managed from a single interface:

  • Desktop and application policies
  • Environmental variables
  • Scripts (VBScript, PowerShell, Kix – you name it)
  • Printer mappings
  • Network drive mappings
  • UI settings – wallpaper, color scheme, etc.

That’s a lot of stuff, and the list isn't exhaustive. There's more involved in establishing a session such as finding and referring an available virtual machine. Good thing load balancing and fault tolerance are built-in features of the Connection Broker.
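To make the target-matching idea concrete, here is a rough Python sketch of how a broker might evaluate Boolean combinations of targets and collect the matching resources. This is an invented illustration of the concept, not the actual vWorkspace implementation or API; all names are hypothetical:

```python
# Hypothetical sketch of broker-style target matching and resource
# assignment. Not the vWorkspace code; names invented for illustration.
def matches(session, target):
    """A leaf target compares one session attribute (device name, user,
    group or OU membership); "and"/"or" nodes combine targets."""
    op = target.get("op")
    if op == "and":
        return all(matches(session, t) for t in target["terms"])
    if op == "or":
        return any(matches(session, t) for t in target["terms"])
    return session.get(target["attr"]) == target["value"]

def assign_resources(session, rules):
    """Collect resources (printer mappings, drive mappings, scripts...)
    from every rule whose target matches the incoming session."""
    resources = []
    for rule in rules:
        if matches(session, rule["target"]):
            resources.extend(rule["resources"])
    return resources

rules = [
    {"target": {"op": "or",
                "terms": [{"attr": "group", "value": "Finance"},
                          {"attr": "ou", "value": "HQ"}]},
     "resources": ["printer:Floor2", "drive:F"]},
]
session = {"user": "alice", "group": "Finance", "ou": "Remote"}
print(assign_resources(session, rules))
```

The Boolean tree is what lets one rule serve "Finance group OR HQ organizational unit" without duplicating the resource list.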

Because so much is happening during a connection request, we are continually striving to improve performance and optimize the user experience. With the release of vWorkspace 8.5 we have reached another performance milestone. Shown below are the results from our benchmark testing. The testing measures the time interval between an initial broker connection request and when that connection is established, for both RDSH and VDI. The vertical axis is measured in seconds and shows a definite trend. As this trend continues we will have achieved time travel without the need for a flux capacitor.


In order to test the broker’s capacity, we installed the vWorkspace 8.5 Broker and Microsoft SQL Server 2014, with a mirrored database instance, on separate Dell PowerEdge 720s and ran automated workloads against the environment with two load profiles: a light load of ~1,000 users and a heavy load of just under 5,000 users. Each automated user session was performed 5 times and the time delta was averaged.

Since this solution runs on top of Microsoft RDS, it is important that the time it takes to process everything be imperceptible to the user. Two years ago the logon process could be delayed by up to 26 seconds in situations where the broker was being taxed. With version 8.5 that delay has been cut more than tenfold, to under 2 seconds.

Dell TechCenter: Is 2015 Hadoop's Year?

A look at the role Hadoop is expected to play in 2015.

Dell TechCenter: William Edsall of Dow Chemical Provides His Thoughts on Why HPC Matters

Dow Chemical's William Edsall provides his thoughts on why HPC matters and other important industry issues at SC14.

Dell TechCenter: Research Shows Advanced Analytics is a Key CIO Priority

Petabyte, exabyte, zettabyte, yottabyte—some analysts predict it won’t be long before we’re talking about the brontobyte. With massive amounts of structured and unstructured data flowing through organizations at alarming rates it’s no wonder CIOs have placed advanced analytics at the top of their to-do lists. In the simplest sense, advanced analytics is the most direct way to ford these Big Data rivers to get to the information businesses need to make their best decisions.

Recent research—conducted by the International Institute for Analytics (IIA) and sponsored by Dell Services—reported that 71 percent of the firms contacted indicated their company is actively using or has near-term plans to use analytics in everyday decision-making. But only 5 percent of companies using advanced analytics report actually using the high-volume or high-velocity data commonly associated with Big Data. Instead, the majority of firms seem to have their hands full with their own internal, “small” data. It’s not as if enterprises don’t want to use Big Data, but the research suggests there’s some prioritization that needs to happen regarding how firms should best proceed.

“Companies and leaders across industries are at a tipping point. We’re seeing our customers place a higher priority on using advanced analytics to execute on their digital business models," said Raman Sapra, executive director and global head of Dell Digital Business Services. “Ultimately, business leaders want to develop digital business models that enable them to attract, serve, and retain customers in the digital era. Advanced analytics provides that actionable insight that enables their transformation.”

The research was commissioned in order to study the advanced analytics landscape in the U.S. The findings included an assessment of advanced analytical maturity, trends and usage as well as how projects among mid-market and enterprise organizations are being executed. Survey respondents indicated a variety of stakeholders are involved, with CIOs and departmental business decision makers taking the lead. The research also found that—relative to company size—organizations are willing to invest significant resources in analytics programs. Two-thirds of mid-market organizations are investing more than $100,000 this year and a slightly smaller 63 percent of enterprise organizations are investing at least $500,000.

In total, the findings highlight an opportunity for businesses in all industries to determine how they plan to use advanced analytics to improve business results.

At Dell, we’re focused on helping customers transform and modernize for the future. As a digital and social pioneer, Dell has shaped its own offerings into Digital Business Services that enable companies to transform their businesses to adopt digital business models through technologies such as analytics, social media, mobile and cloud. 

Dell TechCenter: Are You a Data Hoarder?

I’m starting to think I might be a bit of a data hoarder. I might add that to my list of potential New Year’s resolutions. I came to this conclusion after reading something in Deduplication: Effectively Reducing the Cost of Backup and Storage.

The big question every IT manager has to ask himself or herself is: what am I backing up? Chances are, they are backing up the same data — email messages that have been loitering in mailboxes for months, sales transactions from weeks ago, patient records that haven’t been purged, performance reviews from last year — over and over again. Whatever was in the database and got backed up yesterday got backed up again today, and will get backed up again tomorrow and forever more, until it’s not in the database anymore.
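That repetition is exactly what deduplication exploits: identical chunks of data are stored once and thereafter only referenced. Here is a deliberately minimal Python sketch using fixed-size chunks and SHA-256 hashes; real appliances use variable-size chunking and far more machinery, so treat this as a simplification of the idea, not a product description:

```python
import hashlib

# Minimal fixed-size-chunk deduplication sketch. A second backup of the
# same unchanged data writes (almost) nothing, because every chunk hash
# is already present in the store.
CHUNK = 4096
store = {}  # chunk hash -> chunk bytes (the dedupe store)

def backup(data):
    """Store data as chunk references; return (refs, bytes written)."""
    written = 0
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:     # only new chunks consume storage
            store[digest] = chunk
            written += len(chunk)
        refs.append(digest)
    return refs, written

mailbox = b"same old email" * 2000
refs1, day1 = backup(mailbox)   # first backup writes everything
refs2, day2 = backup(mailbox)   # next night's backup writes nothing new
print(day1, day2)
```

The second call writes zero bytes while still producing a complete, restorable set of chunk references, which is why nightly full backups of a mostly static mailbox cost so little on a dedupe appliance.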

I’ve worked for Dell for over 10 years.  I’m not going to share how many email messages I’ve hung onto “just in case” or how many performance reviews I’ve squirreled away on the off chance I need them for some strange reason (I realize most likely nobody is going to ask me for my 2005 performance review ;) but just in case, I could produce it quickly!), or how many huge graphics files I have from product launches of long ago… (I really hope nobody from our IT department is reading this.) I don’t hoard physical things, but files and data are out of sight so how could that be considered hoarding?

I have realized that if I am a data hoarder, then there are probably a lot of other people doing this too. And across any organization, that’s a lot of email and graphics files piling up and being backed up over and over. I did a quick search and found an Email Statistics Report from the Radicati Group which shows that this year, business users send and receive on average 121 emails a day (and of course that’s expected to grow to 140 emails a day in 2018…can’t wait). And that’s just email. That doesn’t even get into all of the social media videos and graphics.

Seems almost all of the customers we talk to are trying to do things better, faster, more efficiently, at lower cost or just plain smarter in some way or another.  We’re focused on finding ways to do things smarter within backup and recovery.  We’re looking for ways to help our customers spend less money, take back time that’s being swallowed up in something it doesn’t need to be, and move more quickly in the crazy fast business world we’re all in. We’re finding ways to design technology to improve the way data protection is done. It sounds a little hokey but it’s true.

Purpose-built appliances are a great example of that. Think about it… you probably have some data hoarders within your organization. But you know you can improve your ROI by shrinking your footprint on secondary storage, and data deduplication is a good way to do just that. Using up to 93% less storage space, at a cost as low as $0.17/GB, can definitely help you spend less money. We’ve had customers tell us that it freed them up to do more in their environments. Add the fact that appliances are turnkey solutions, and that means fewer dollars, less footprint, and just less of a burden on day-to-day operations.

Hmmm, just got another “your inbox is almost full”…when will IT fix that?

Anyhow, if you’re interested in learning more about dedupe and appliances, check out this paper by Srinidhi Varadarajan, Senior Distinguished Engineer, R&D – Development, here at Dell.


Dell TechCenter: Three ways to benefit from workspace reporting

As your mobility program or bring-your-own-device (BYOD) initiative expands, it can be difficult to keep track of a growing number of users, devices, operating systems, applications and more. To address potential security concerns, demonstrate regulatory compliance and improve mobile enablement planning, you need clear visibility into what resources are being used, how they’re being used and who is using them.

Dell Enterprise Mobility Management (EMM) is a comprehensive mobile enablement solution for smartphones, tablets, laptops and desktops with built-in monitoring and reporting capabilities. Let’s look at how those capabilities can help you maintain security, prove compliance and improve planning.

1. Strengthening workspace security

Monitoring and reporting capabilities available with Dell Mobile Workspace and Desktop Workspace — the two secure enterprise workspace components of Dell EMM — can help you quickly identify and address potential security issues. For example, you can create a report that shows how many instances of Mobile Workspace or Desktop Workspace are deployed per user, and you can set a maximum number for each person. Even though it would be extremely difficult for unauthorized people to access the content within a workspace, controlling the number of workspace installations helps minimize the odds that a device with a workspace will fall into unauthorized hands.
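The shape of such a per-user report is simple: group deployment records by user and flag anyone over the allowed maximum. The sketch below is a hypothetical Python illustration of that pattern; the record format, field names, and limit are invented, not the Dell EMM console or API:

```python
from collections import Counter

# Hypothetical per-user workspace report: flag users with more deployed
# workspace instances than policy allows. All data here is invented.
MAX_WORKSPACES_PER_USER = 2

deployments = [  # (user, device) pairs, e.g. exported from a console
    ("alice", "laptop-1"), ("alice", "tablet-1"), ("alice", "phone-1"),
    ("bob", "laptop-2"),
]

def over_limit(deployments, limit=MAX_WORKSPACES_PER_USER):
    """Return {user: count} for users exceeding the workspace limit."""
    counts = Counter(user for user, _ in deployments)
    return {user: n for user, n in counts.items() if n > limit}

print(over_limit(deployments))  # alice has 3 workspaces, one too many
```

A report like this is what lets an administrator spot the extra installation and retire it before the device changes hands.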

You can also track and report on the type of devices and operating systems used by employees, even if employees are using personally owned devices. If you learn of a new security issue with a particular operating system, for example, you could find users and devices that might be at risk and then quickly remedy the situation with a patch or another fix.

2. Proving compliance

Tracking and reporting capabilities can also help you prove compliance with industry and government regulations. With Mobile Workspace and Desktop Workspace, you can run a report from the workspace console that shows a complete list of users and devices that have access to regulated data.

Reports can also show how policies are implemented. For example, with Dell EMM, you can apply a policy to prevent mobile employees from copying and pasting data between corporate applications and personal applications. To prove your compliance, you can then log these file operations to provide regulatory officials with an audit trail of an employee’s activities.
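What such an audit trail might look like can be sketched in a few lines of Python. The event fields and log format below are hypothetical, invented for illustration rather than taken from Dell EMM:

```python
import json
import time

# Hypothetical audit-trail sketch: record blocked copy/paste attempts
# between corporate and personal apps as one JSON line per event.
audit_log = []

def log_clipboard_event(user, src_app, dst_app, allowed):
    """Append an audit entry and return it as a JSON line for auditors."""
    entry = {
        "ts": time.time(),
        "user": user,
        "action": "clipboard_copy",
        "from": src_app,
        "to": dst_app,
        "allowed": allowed,
    }
    audit_log.append(entry)
    return json.dumps(entry)

line = log_clipboard_event("alice", "corp-mail", "personal-notes", False)
print(line)
```

An append-only log in a machine-readable format like this is what turns a policy ("no copy/paste to personal apps") into the evidence trail a regulator can actually inspect.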

3. Improving planning

Reporting capabilities can also help you optimize the Dell EMM deployment across the enterprise. For example, you might initially provide Mobile Workspace and Desktop Workspace to all employees within a business group, but then generate a report that shows which users are using which workspace. Understanding usage patterns can help you match each workspace solution with the right user roles as you deploy secure enterprise workspaces across additional business groups.

If you’re using Dell EMM with enterprise-owned devices, you can also use reporting to help plan hardware and software upgrades. For example, you might have deployed Windows tablets with 32 GB of storage but need more storage to support an upcoming release of an enterprise application. Knowing how many devices are deployed and which business groups are using those devices can help you plan any changes you need to make.

Maintain user privacy

Monitoring and reporting will not infringe on the personal privacy of employees. When you use Mobile Workspace or Desktop Workspace with employees’ personally owned devices, you are managing only the secure enterprise workspace, not the personal environment. You can assure employees that you won’t be tracking their personal text messages, checking their personal browser history, viewing their personal photos and so on. That’s a key advantage of a workspace solution: You can retain full control over enterprise resources while allowing users to keep work and personal environments completely separate.

To learn more about Dell EMM, visit:

Dell TechCenter: Open Networking – Disaggregation is the New Black

Dell was the first tier-one networking vendor to kick off the network disaggregation discussion almost a year ago. We call it Open Networking and the idea is to provide an open alternative to the vertically-integrated, vendor-dependent and highly rigid model of the past 20 years.

Last week, we held a roundtable discussion in San Francisco to discuss our progress this year and I was pleased to be joined by JR Rivers, the co-founder and CEO of Cumulus Networks – our first Open Networking partner – as well as Dan Dumitriu, co-founder and CEO of Midokura, which is our newest partner in this endeavor.

The discussion highlighted the work that Dell and its partners are doing to disrupt a networking paradigm that needs to be challenged on behalf of customers. We’re actively connecting the dots with like-minded companies to upend the traditional, black-box model of networking by separating the dependencies of the hardware and the software running on top of it. Our Open Networking initiative is about being open, flexible and software-defined to help maximize our customers’ application environments.

Industry research firm Gartner, Inc. recently validated our approach in a paper entitled "The Future of Data Center Network Switches Looks 'Brite,'" in which it introduces the term "brite box" switching.

As Gartner explains, this new approach essentially splits the difference between traditional networking and white-box switching, eliminating the challenges of both. According to Gartner, this allows network decision makers to “reduce cost, improve management and enable long-term innovation using ‘brite box’ switches versus traditional switching approaches.” 

These are some of the key benefits that Open Networking offers customers. Organizations looking for a new way to structure and run their networks can purchase a tested, validated, highly-engineered switch from a tier-one networking vendor, choose a networking OS to put on it, and have it backed up by world-class global services and logistics capabilities. These are things that simply aren’t available from traditional white-box providers.

As JR stated, "Cumulus Networks and Dell share a commitment towards an eco-system approach to modern data centers. Dell is accelerating the new software-defined infrastructure by providing enterprises the choice and economics enjoyed by mega-scale operators combined with a single source for procurement, installation and support.”

 Further validating our approach, Juniper Networks recently announced its own “open, cost-effective, disaggregated” switching platform in the form of an Open Compute Project switch that runs its Junos software.  Clearly this approach is picking up steam, and Dell is leading the way. Midokura, a global company focused on network virtualization, has become our latest Open Networking partner. At the roundtable we announced an agreement with Midokura to complement our networking and server infrastructure including a joint go-to-market program, validated reference architecture and global reseller agreement.

Midokura’s Enterprise MidoNet software on Dell infrastructure delivers a network virtualization overlay for OpenStack that helps enterprise customers and service providers create an agile, scalable and cost-efficient cloud networking infrastructure based on open technologies.

Additionally, with Dell Open Networking switches, the Cumulus Linux operating system and MidoNet, Dell is offering comprehensive network virtualization solutions for the software-defined data center. The combined solution enables a growing number of service providers and enterprise customers to provision scalable virtual networks to connect to physical workloads in a matter of minutes.

Dan from Midokura put it like this: “Midokura, like Dell, is committed to expanding the Open Networking initiative to meet the needs of today’s modern enterprises and help deliver an open foundation for compute, storage and networking infrastructure. We’ve already successfully teamed up with Dell to bridge virtual and physical networks and we look forward to deepening our collaboration to create an open, converged infrastructure for enterprises to support clouds that are easy to scale and operate.”

We’re proud of the progress we’ve made this year in helping customers build and operate the networks that they need – not the ones a vendor mandates. Customers, competitors and industry analysts are taking notice. We look forward to 2015.

Dell TechCenter: We Gave ‘Em Something to Talk About at Dell World 2014

Dell World 2014 took place more than a month ago, so it’s really great to see that several of the more than 6,000 attendees are still talking about it!

Last week one of our PartnerDirect Premier partners, Interworks, posted key takeaways from Dell World on their blog:

“The most interesting and relevant session we went [to] was put on by the CIO for Metropolitan Nashville Public Schools in Tennessee. As one of the largest school districts in Tennessee, their IT needs were extensive. They spent $7 million on wireless infrastructure alone. One of their unique challenges was scaling IT resources affordably. They did a great job of illustrating how even a $50 change could equal a huge spike in cost when scaled across 10,000 users. It was great to see how a school district went about implementing Dell solutions creatively as we have many clients in education.”

And yesterday, SiliconAngle posted an interview they conducted at Dell World with Ashley Gorakhpurwalla, vice president and general manager of Dell Server Solutions. He discussed with theCUBE how Dell is looking at the market and where its portfolio fits.

“They want to compete in the marketplace while still offering choice to the consumer, which is vitally important in hyperscale situations, where customers absolutely know what they want. With the marketplace evolving and shifting so rapidly, Dell wants to be in a position to tailor their offerings to customers more efficiently,” they noted.


All together, we saw almost 900 unique stories published around the globe following Dell World 2014. ZDNet’s Ken Hess discussed how our innovation, products and acquisition activity has “introduced (him) to a whole new Dell” and Stuart Crawford of MSP Advisor said:

“Dell is working to make better technology and better ways of using that technology to not only make your job easier, but to help you grow your business.” 

And that’s really what we love to hear – that our driving desire to make it easier for our customers to do more of whatever it is they want to do is coming through. Interworks said we “really are listening to [our] client base and are taking action in the right direction.”

That statement, to me, means the event was a success.

Did you attend Dell World 2014? Looking back a month later, what still stands out from it in your mind?

Dell TechCenter: Identity Manager named a Leader in all 4 access governance categories by KuppingerCole

Access Governance remains one of the fastest growing market segments in the broader IAM/IAG (Identity and Access Management/Governance) market. Over the past few years, this segment has evolved significantly. Access Intelligence, which provides advanced analytical capabilities for identifying access risks and analyzing the current status of entitlements, is one of these additions. Improved capabilities in managing access risks are another. Some vendors have also added user activity monitoring to their products.

We were named a Leader in all 4 categories in KuppingerCole’s 2014 Access Governance Leadership Compass report:

  • Overall Leader
  • Product Leader
  • Market Leader
  • Innovation Leader

This Leadership Compass provides an overview and analysis of the Access Governance market segment, and the solutions available.

Read Analyst Report

Dell TechCenter: Clemson's Boyd Wilson Offers His Thoughts on Why HPC Matters

Boyd Wilson of Clemson and Omnibond offers his thoughts on why HPC matters and other issues facing the industry.

Dell TechCenter: Advantages of the DDR4 memory technology in 13th generation PowerEdge servers

The 13th generation PowerEdge server family, based on the Intel E5-2600 v3 family of CPUs, implements the new DDR4 system memory standard, which offers advantages over 12G's DDR3 including faster speeds, increased bandwidth, higher density, better energy efficiency and improved reliability.

 Let's consider each of these advantages in more detail.

- Faster speeds: Up by 14% when populated at 1 or 2 DIMMs per channel (DPC), and by up to 40% at 3 DPC. These frequencies are expected to scale up to 2400 MHz in 2015.

- Increased Bandwidth: That speed increase translates to higher throughput as this comparison using the popular STREAM memory bandwidth benchmark illustrates.


To put actual quantities to these relative improvement percentages, here are measurements from PowerEdge models R630 and R620. Note these are with just a single processor installed. Total system bandwidth of a 13G launch platform has been measured at 120 GB/s!
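These measurements are easy to sanity-check against DDR4's theoretical peak. A back-of-the-envelope sketch, assuming the launch-era DDR4-2133 transfer rate and the E5-2600 v3's four memory channels per socket:

```python
# Peak theoretical DDR4 bandwidth: transfers/s x 8 bytes (64-bit channel),
# summed over channels and sockets.
def peak_bandwidth_gbs(mt_per_s, channels, sockets):
    bytes_per_transfer = 8  # 64-bit data bus per channel
    return mt_per_s * 1e6 * bytes_per_transfer * channels * sockets / 1e9

# E5-2600 v3 CPUs have 4 memory channels per socket; DDR4-2133 at launch.
per_socket = peak_bandwidth_gbs(2133, channels=4, sockets=1)
dual_socket = peak_bandwidth_gbs(2133, channels=4, sockets=2)
print(round(per_socket, 1), round(dual_socket, 1))  # 68.3 136.5
```

A measured 120 GB/s total-system figure is then roughly 88% of the ~136 GB/s dual-socket theoretical peak, in line with what STREAM typically sustains.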


- Higher Density: DDR4 offers scalability for the future. Individual DIMM densities start at 4GB and go up through 32GB today, with 64GB availability in 2015. Imagine the 1U model with 1.5 TB or the 4U model with 6 TB for virtualization and big data applications. Here's how the mainstream DIMM size has been trending.
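Those headline capacities follow directly from slot count times DIMM density. A quick sketch (the 24- and 96-slot counts are assumptions about the 1U and 4U chassis, not quoted specs):

```python
# Maximum memory = DIMM slots x largest DIMM density (GB), expressed in TB.
def max_memory_tb(dimm_slots, dimm_gb):
    return dimm_slots * dimm_gb / 1024

# Assumed slot counts (illustrative): 24 DIMM slots in the 1U chassis,
# 96 in the 4U class, with the 64GB DIMMs noted above for 2015.
print(max_memory_tb(24, 64))  # 1.5
print(max_memory_tb(96, 64))  # 6.0
```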


- More efficient: Energy efficiency grows more important with each new CPU/GPU architecture, with regulatory certification requirements and in server buying decisions, and DDR4 delivers on this front as well.

And so DDR4 system deployment will save not just electricity, but also data center provisioning costs such as backup power, cooling and physical space.

Even in a memory-rich PowerEdge T630 configuration running the especially intensive Linpack benchmark, we found that its DDR4 memory subsystem now accounted for just 4% of the overall system input power requirement. And again, this was while that memory sustained 16% better throughput than DDR3 could in the comparably configured 12G T620.

- More reliable: DDR4 implements real-time write error detection/correction on its internal command and address buses, not just the data bus that DDR3 was limited to. DDR4 also adds built-in thermal monitoring with the provision to adjust its access timings should a temperature excursion occur.

Thanks for reading. In a future installment I'll talk about other 13G PowerEdge memory performance aspects, including access latency, Performance vs. Lock Step memory configuration, RDIMM vs. LRDIMM, Node Interleaving (NUMA vs. UMA), 1x vs. 2x refresh, and CPU Snoop mode sensitivity.



Dell TechCenter: Consistent Server Virtual Disk Configuration for Exchange Databases using Dell PowerEdge PowerShell Tools

In an earlier post, we looked at the building block architecture for Microsoft Exchange 2013 deployments using PowerEdge R730xd. In the Pod architecture, each server in the deployment has a standard configuration. Each PowerEdge R730xd server in the reference implementations described in my earlier post has 16 LFF NL-SAS drives that are configured in a specific manner for storing Exchange databases. In the scale-up scenarios, an additional 12 LFF NL-SAS drives are used. This RAID virtual disk configuration must be consistent across all the servers for better manageability.

Configuring servers consistently ensures predictability and reduces variables while troubleshooting deployment issues. System management tools and automation help achieve this consistency with far less effort while reducing human error during solution deployment. While multiple system components must be configured before we complete the solution deployment, for today’s article, we’ll limit the discussion to creation of virtual disks to store Exchange databases. As a reference for this virtual disk configuration, we’ll use the disk layout described in an Exchange 2013 solution implementation on Dell PowerEdge R730xd servers. The following figure provides a high-level overview of the disk layout used in the reference implementation.

Figure 1 RAID LUNs required for Exchange Databases

The Large Form-Factor (LFF) chassis configuration of PowerEdge R730xd contains sixteen 4TB NL-SAS drives. The internal drive tray can host up to four of these drives. This is shown in Figure 1. The LFF drives in the front bay are numbered from 0 to 11 and the LFF drives in the internal drive tray are numbered from 14 to 17. The 2.5-inch SAS drives at the rear are used for deploying OS and these are numbered as disk 12 and 13.

The recommended layout for all RAID disks in the system is shown in Table 1.

Table 1 Recommended RAID layout in PowerEdge R730xd for Exchange deployment

RAID Volume          RAID Level
Disk 12 & Disk 13
Disk 0 & Disk 1
Disk 2 & Disk 3
Disk 4 & Disk 5
Disk 6 & Disk 7
Disk 8 & Disk 9
Disk 10 & Disk 11
Disk 14 & Disk 15
Disk 16
Disk 17


Apart from the RAID layout, the block size for these virtual disks should be set to 512.

To achieve consistent virtual disk configuration across servers, the RAID layout described in Table 1 must be translated into a re-usable template. This is where Dell’s systems management tools can help. The WS-Management interfaces on iDRAC help you export a system or component configuration as a Server Configuration Profile (represented as an XML file) and restore or import the configuration on a different system.

To access WS-Management interfaces using Windows PowerShell, you need to understand various Common Information Model (CIM) profiles available on iDRAC, identify the right CIM classes and methods to perform RAID device configuration, and finally use those interfaces in PowerShell through CIM cmdlets. This is not easy without knowing how to use CIM profiles and PowerShell CIM cmdlets. The systems management team at Dell recently announced an experimental release of PowerShell Tools for Dell PowerEdge Servers. Using the cmdlets in this experimental release, you can export component configurations and import them on a different server as easily as using any other PowerShell cmdlet.

The two cmdlets that we will explore in today’s article are Export-PEServerConfigurationProfile and Import-PEServerConfigurationProfile. The first cmdlet exports the component configuration to a network share and the latter imports it on a target server to deploy the Server Configuration Profile.

Before you use these cmdlets, make sure the PowerShell Tools module is available on your system.

Starting with PowerShell 3.0, if you have not disabled auto module loading, the cmdlets from any available module can be directly accessed without explicitly importing the module using the Import-Module cmdlet.

Using the Get-Help cmdlet, you can see a list of examples on how to use the Export and Import System Configuration Profile cmdlets.

Get-Help Export-PEServerConfigurationProfile

You can also see a list of examples for these cmdlets by adding the –Examples switch parameter to the Get-Help cmdlet. Each of these cmdlets requires a CIM session to the iDRAC of the server we’re going to manage. This is created using the New-PEDRACSession cmdlet.

#Credentials for accessing iDRAC

$DRACCredential = Get-Credential

#Create a CIM session

$DRACSession = New-PEDRACSession -IPAddress '192.0.2.10' -Credential $DRACCredential # substitute your iDRAC IP address

Once we have the DRAC session created, we can use the Export-PEServerConfigurationProfile cmdlet to export the existing virtual disk configuration to an XML file. When using this cmdlet, we need to provide the NFS or CIFS share details along with the credentials required, if any, to access the share. The Get-PEConfigurationShare cmdlet provides a method to construct a custom PowerShell object representing a CIFS or NFS share. Optionally, using the –Validate parameter, you can also test whether the network share is accessible from iDRAC.

#Share object creation for re-use

$ShareCredential = Get-Credential -Message 'Enter credentials for share authentication'

#-Validate switch ensures that the share is accessible from iDRAC.

$Share = Get-PEConfigurationShare -iDRACSession $DRACSession -IPAddress '192.0.2.20' -ShareName Config -ShareType CIFS -Credential $ShareCredential -Validate # substitute your share host's IP address

Finally, the Export-PEServerConfigurationProfile cmdlet can be run to export the component configuration as XML.

#Export Configuration of RAID controller

Export-PEServerConfigurationProfile -ShareObject $Share -iDRACSession $DRACSession -FileName VD.xml -Target 'RAID.Integrated.1-1' -ExportUse Clone -Wait

In the above command, observe that we’re exporting the configuration of only the integrated RAID controller. By default, without the –Target parameter, the complete system configuration is exported.

You will notice in the exported configuration file that some of the attributes are commented out. Importing a Server Configuration Profile on a target system that is already configured can have destructive implications, which is why those attributes are commented. Therefore, to be able to import this configuration on a target system, we need to uncomment these lines in the XML.
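Since hand-editing the XML is tedious across many servers, the uncommenting step can be scripted. A minimal sketch, assuming the export wraps each disabled attribute in a standard single-line XML comment (the attribute name below is hypothetical; inspect your actual export before relying on this):

```python
import re

# Sketch: strip the XML comment markers around disabled <Attribute> entries
# in an exported Server Configuration Profile. Assumes each disabled
# attribute sits in a standard <!-- ... --> comment on a single line.
def uncomment_attributes(xml_text):
    return re.sub(r"<!--\s*(<Attribute\b.*?</Attribute>)\s*-->", r"\1", xml_text)

# Hypothetical fragment of an exported profile (attribute name is made up).
sample = ('<Component FQDD="RAID.Integrated.1-1">\n'
          '<!-- <Attribute Name="SomeRAIDSetting">False</Attribute> -->\n'
          '</Component>')
print(uncomment_attributes(sample))
```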

Now that you understand how the export works, using the Import-PEServerConfigurationProfile cmdlet is easy. Before we perform the actual import, we can preview whether the configuration specified in the XML can actually be deployed on the target system. This is done using the –Preview switch parameter.

#Create a CIM session

$DRACSession = New-PEDRACSession -IPAddress '192.0.2.10' -Credential $DRACCredential # substitute your iDRAC IP address

#Preview a configuration XML import

Import-PEServerConfigurationProfile -ShareObject $Share -iDRACSession $DRACSession -FileName VDGOLD.xml -Preview -Wait

If you do not see any errors after a preview, the XML configuration for the integrated RAID controller can be deployed successfully on the target system. So, to perform the deployment, we just need to remove the –Preview switch and run the Import-PEServerConfigurationProfile cmdlet again.

#Import the configuration XML

Import-PEServerConfigurationProfile -ShareObject $Share -iDRACSession $DRACSession -FileName VDGOLD.xml -Wait

The –Wait switch parameter shows a progress bar indicating which tasks are being performed while the Server Configuration Profile is imported. Once you import the configuration, the target system reboots and the RAID virtual disks get created on the integrated RAID controller.

This method of deployment is repeatable and far less error-prone. For a system administrator, it also provides a scripting mechanism that can be used as part of a much bigger data center automation framework.

Dell TechCenter: SSD in the Array? Today

Author: Tien Shiah, Samsung Product Marketing Manager – SSD

Tien brings more than 15 years of product marketing experience from the semiconductor and storage industries.  He has an MBA from McGill University and an Electrical Engineering degree from the University of British Columbia.

By now, you have recognized that the performance gap between the CPU and hard disk-based storage systems has been increasing significantly every year. Until now, you may have even relied on popular workarounds to reduce this gap (e.g. RAID schemes and/or expensive cache memory). As this technical bulletin points out, traditional HDD storage systems increasingly struggle to meet the I/O performance requirements of data-hungry applications.

At the same time, the cost of flash storage continues to decrease. Moreover, flash storage can now be deployed in much greater capacities than cache, giving you the opportunity to reduce latency and improve overall performance. But what about using flash in an external storage array? With auto-tiering and other advancements, large amounts of flash storage, in the form of solid-state drives (SSDs), are becoming commonplace in the enterprise.

The Increasing Power and Value of SSDs

SSDs with a SAS interface are designed for use within an external storage array. They are primarily designed to handle high volume, write-intensive applications such as online transaction processing (OLTP), mail servers, and other common enterprise applications. Engineered to maximize write endurance, these drives provide long-lasting sustained performance by relying on advanced intelligence that promotes “wear-leveling” and other features required in the data center.

As an example, using industry-leading Samsung 12Gbps SAS SSDs as a second layer of cache in a storage array will help accelerate the transfer of data between a system’s DRAM-based cache and HDDs. This has the effect of reducing latency, as hot data is stored in the higher-performing flash storage cache, thereby reducing the need to read data from slower-moving HDDs. It’s ideal for systems that repeatedly access the same blocks of storage and then quickly change to repeatedly access a different set of blocks, an occurrence we often see with databases that handle online transactions.

Additionally, as the $/GB price of SSDs continues to fall, there are increased opportunities to make use of different drive types within an array. Many storage arrays include auto-tiering, a technology that stores more frequently accessed data on the fastest storage available and less frequently accessed data on slower (but typically higher capacity) storage options.

Maximizing Your Auto-Tiered Configuration

So what is key to maximizing an auto-tiered configuration? It’s knowing how much space you need. A simple way to determine this is to examine the daily amount of “data change” for the servers configured to use auto-tiered storage. If the amount of SSD storage is a little larger than the amount of data that is changing, then all of the most recently used data for any given day will remain within the SSD storage.
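That sizing rule can be expressed as a one-line calculation. A minimal sketch, where the 1.2x headroom factor is an illustrative assumption rather than a vendor guideline:

```python
# Rough SSD tier sizing: daily data change x headroom. The 1.2x headroom
# factor is an illustrative assumption, not a vendor guideline.
def ssd_tier_size_gb(daily_change_gb, headroom=1.2):
    return daily_change_gb * headroom

# e.g. servers that rewrite roughly 500 GB of data per day
print(ssd_tier_size_gb(500))  # 600.0
```

With an SSD tier a little larger than the daily change rate, the most recently touched data for any given day stays on flash.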

On some occasions, you may even consider deploying an SSD-only storage array. While this will certainly deliver sustained performance improvements across the servers attached to the array, there are other considerations that should play a role in your configuration decisions (e.g., total capacity, cost, data protection, and redundancy requirements).

Knowing that a primary reason for choosing SSDs is performance acceleration, it is best to first consider the business goals that are driving the decision-making process and how technology can help you achieve those goals. You might also want to refer to Samsung’s Green SSD website pages to see how SSD technology can solve your specific data center needs.

Every application in your data center has different I/O requirements.  Understanding these requirements is the starting point for selecting the right SSD strategy. When TCO is a primary driver for storage selection, then flash best suits read-intensive applications with a random I/O pattern. Traditional HDDs may be more suitable for bulk storage that is infrequently accessed. In many cases, a combination of the two will allow you to maximize the advancements in flash technology while best addressing your specific needs and TCO requirements. 

For more information about Samsung SSDs, please visit

Dell TechCenter: Solid State Drives: An Objective Comparison to HDDs

Author: Tien Shiah, Samsung Product Marketing Manager – SSD

Tien brings more than 15 years of product marketing experience from the semiconductor and storage industries.  He has an MBA from McGill University and an Electrical Engineering degree from the University of British Columbia.

Solid State Drives (SSDs) and hard disk drives (HDDs) do the same basic job: each stores data. They hold the operating system and applications, and store personal files. But each uses fundamentally different technology to accomplish the same tasks.

What's the difference?

To answer that question: HDDs use rapidly rotating platters of magnetic media to store data. This technology dates back to 1956, and it’s been refined over decades to become the default storage medium for desktops and laptops. SSDs, on the other hand, have no moving parts, using integrated circuits as memory to store data. Though tested and proven for more than a decade, SSDs still aren’t as common.

Why would a user choose one over the other? Well, each technology has its advantages depending on the factors that are important to you.

Price: To put it bluntly, in terms of dollars per gigabyte, SSDs are more expensive. As of October 2014, a 500GB SSD retails for $269.00, while a 500GB hard drive retails for $64.99. That translates into $0.54/GB for the solid state drive and $0.13/GB for the hard drive. Smaller SSDs are more cost-effective, but for the most part, a price gap remains between SSD and HDD, though SSD prices have declined significantly over the past few years.
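The per-gigabyte figures fall straight out of the quoted retail prices:

```python
# Dollars per gigabyte from the October 2014 retail prices quoted above.
ssd_per_gb = 269.00 / 500  # ~0.538 $/GB
hdd_per_gb = 64.99 / 500   # ~0.13 $/GB
print(round(ssd_per_gb, 3), round(hdd_per_gb, 3))
print(round(ssd_per_gb / hdd_per_gb, 1))  # SSD is roughly 4.1x the cost per GB
```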

Capacity:  SSDs have another perceived challenge. Basically, the more storage capacity, the more data and applications (programs, photos, music, videos, etc.) a PC can hold. At the moment, Samsung offers a range of PC SSDs up to 1TB. But hard drives offer much larger capacities, up to 4TB. Realistically, however, for many business users this capacity discrepancy isn’t an issue because they would struggle to fill a smaller drive, much less a terabyte. It would only become an issue for anyone who handles large numbers of large files, such as heavy multimedia users or graphic designers.

So why choose SSDs?

Performance: Mainly, it comes down to performance. In one test conducted by an independent third party, a Samsung 840 EVO SSD completed a Windows boot cycle in 18 seconds, while the same laptop using a standard 7200RPM hard drive took over a minute (1:11) to perform the same task. Other tasks, including application launches and shutdowns, show a similar performance boost. Normal operation also becomes faster, so system responsiveness improves. Whatever the use, this extra speed may be the difference between finishing on time and failing to, and it certainly leads to better user satisfaction over time.
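Put in relative terms, the cited boot times work out as follows:

```python
# Boot-time comparison from the third-party test cited above:
# 18 s for the SSD vs 1:11 (71 s) for the 7200RPM HDD.
hdd_boot_s = 1 * 60 + 11
ssd_boot_s = 18
speedup = hdd_boot_s / ssd_boot_s
print(round(speedup, 1))  # ~3.9x faster boot on the SSD
```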

Durability: Solid State Drives offer a compelling durability advantage. With no moving parts, an SSD is more likely to preserve data in the event of impacts or physical damage. Most hard drives park their read/write heads when the system is off, but the heads are flying over the spinning platter at high speed whenever the drive is in operation. That difference makes SSDs fundamentally valuable for anyone who travels with their laptop or works in an environment with vibration or physical shocks, like a factory floor.

Fragmentation: Because of their rotary recording surfaces, HDDs work best with larger files that are laid down in contiguous blocks. That way, the drive head can start and end its read in one continuous motion. When hard drives start to fill up, large files can become scattered around the disk platter, a phenomenon known as fragmentation. HDD fragmentation causes system performance to degrade over time, while SSDs, using solid-state memory, don't have this limitation. Since there's no physical read head, SSD performance doesn’t decay due to fragmentation.

Heat and Power: Because HDDs rely on spinning platters, they consume more power than SSDs, which causes them to emit more heat, resulting in additional cooling needs. For desktops, few people care, but for laptops, which run on batteries, the power consumption difference matters. Some of the most power-efficient SSDs are made by Samsung, consuming as little as 0.045W at idle. Also, since SSDs are so much faster, they finish tasks quickly, reducing the time spent at peak power consumption.

Noise: Even the quietest HDD makes noise. Noise comes from the drive spinning or the read arm moving back and forth. Faster hard drives make more noise than slower ones. SSDs make virtually no noise at all, since they have no moving parts.

Longevity: There’s a lot of discussion around longevity, but under average commercial workloads, a typical client SSD will last over 16 years. That’s because SSDs have adopted a range of technologies to ensure data integrity: bad block management, error-correcting code, and wear leveling. It’s possible that HDDs could have a longer lifespan, but for most users the longevity difference is a moot point that won’t matter in real life, especially if impact resistance is taken into account.

What’s the conclusion?

Overall, HDDs currently hold an advantage on price and capacity. SSDs work best if speed, ruggedness, power/cooling, noise, or fragmentation (technically also part of speed) are important factors. If it weren't for the price difference, SSDs would be the winner hands down. Regardless, for many users, the benefits of SSDs outweigh the few advantages still held by HDDs.

Samsung, the world’s leading supplier of SSDs, offers a broad portfolio covering all of your capacity, form-factor, and interface needs.

For more information, please visit

Mark Cathcart: DVT and the American Way

As we approach the holiday season, I am reminded to check the American Way magazine to see if they’ve updated their advice about Deep Vein Thrombosis, aka DVT. My legs are sore from Sunday’s race, so better safe than sorry.

Looks like I’m still in with a chance to win the 100,000 award miles for my letter to the editor last month; after all, they have not changed the advice. Here is what I wrote:

I’m in 22a of AA 1149, I’ve been here for about 3.5 hours since boarding. The guy in the exit row seat in front, despite a polite request, refuses to put his seat upright.

The space left between us is so small, I can only use my laptop as an oversize MP3 player, lid closed. I’ve read American Way cover to cover and it’s a great issue.

I did though find the diagrams for avoiding DVT hilarious. I can barely do the ankle rotations in the space I have; knee/chest lifts, even knee lifts, are simply not possible. Perhaps you could update the diagrams?

Yours, 3-million miler (almost), 6ft triathlete with a 35-inch waist…

and yes, the following illustration is still there in the December 2014 issue.

See page 106


After all, you wouldn’t want to find out that you needed seat savers in order to prevent a real

Dell TechCenter: What North American IT Administrators Can Learn from European Counterparts

It’s not exactly a news flash – Europe is a very different market than North America. Different languages; different currencies; different business demographics; different regulations; different approaches to common problems. But when it comes to data protection, backup and recovery, a recent technology spotlight from IDC UK shows many of the challenges faced by North American IT administrators are very similar across the pond.

After all, if one were to argue the “global economy” remains somewhat segmented because of currency differences, the “global data environment” is far more cohesive because of the simple fact that data is data, no matter where you are.

So rest assured, my North American data protection friends – you are not alone!

But what else can we learn from the similarities? Data may be the same across the globe, but as anyone who interacts with our customers will tell you, some products are more popular in different parts of the world, which means our counterparts elsewhere are solving similar problems in different ways. That is an opportunity to learn something new!

This new IDC paper and survey focus primarily on the growth of purpose-built appliances in Europe – a trend that is happening across the globe. European IT administrators are increasingly applying appliances to their top data protection priorities:

  • Ensure retention and compliance
  • Reduce storage-related costs
  • Protect virtualized servers
  • Expand storage capacity
  • Enhance disaster recovery

Do these priorities look familiar? I bet they do.

One interesting point in this survey is that “retention and compliance” is the top priority, whereas I have seen it predominantly as a secondary priority in North American studies. This is probably due to the pending ratification of the European data protection regulation, but that does not change the fact that North America, Europe and indeed the rest of the world are, and will be, subject to increasing compliance regulations.

So how do appliances address these priorities? Glad you asked.

  1. Focus – be it a backup and recovery appliance or a deduplication appliance, appliances add focus to an environment, ensuring certain work is covered. They can be used to address any of the above priorities and can be deployed quickly and easily, usually within an existing architecture.
  2. Reduced costs – compression and deduplication can reduce storage costs and bandwidth requirements, directly impacting the bottom line.
  3. Flexibility – because of their focus, appliances add flexibility to both the deployment and operations of a backup and recovery environment.
  4. Scalability – because appliances can be easily deployed and are easily managed, they can reduce the need for human intervention and grow with a data environment.

Worldwide, organizations are seeing these benefits and turning to appliances more and more. In September of 2014, IDC released its figures for worldwide purpose-built backup appliance (PBBA) factory revenues in the second quarter of 2014. Overall, revenue grew 8.4 percent year over year and is expected to continue to grow.

Over the next few weeks we will be discussing the benefits and intricacies of appliances – purpose-built, backup and recovery, virtual appliances, and more – so check back for new insights.


Dell TechCenterJimmy Pike of Dell Offers His Insights on Why HPC Matters at SC14

Jimmy Pike offers his insights on why HPC matters and other issues important to the industry.

Dell TechCenterNew Dell SonicPoint Series Enhances Wireless Network Security Solution

Wireless has become an imperative for virtually every type and size of organization. Businesses add wireless into their network infrastructure as they look to increase customer value and improve employee productivity through mobility initiatives such as BYOD. K-12 schools and universities use wireless as a means to provide students with a more connected educational environment while hospitals and dental offices utilize wireless to enable medical staff to access patient information while roaming.

Fueling this requirement for wireless is the continued proliferation of WiFi-enabled devices, both personal and IT-issued. Coupled with the increase in wireless devices is the use of bandwidth intensive applications including video and voice, HD multimedia and cloud and mobile apps. Together, this combination is driving the need for organizations to provide a growing WiFi user population with a high-speed wireless solution that significantly enhances their experience while at the same time maintains maximum security. 

The Dell SonicWALL Wireless Network Security solution featuring the new Dell SonicPoint Series of wireless access points helps small and mid-sized organizations achieve all three. The Dell SonicPoint ACe and SonicPoint ACi are built on the 802.11ac wireless standard, enabling retail point-of-sale businesses, schools and healthcare organizations to provide their employees, students and customers with high-speed wireless connectivity. At almost three times the speed of the previous 802.11n standard, 802.11ac gives users a better wireless experience. SonicPoints tightly integrate with Dell SonicWALL next-generation firewalls that scan wireless traffic for threats and eliminate them, keeping wireless traffic as secure as wired traffic.

“We know it’s critical to be able to provide our employees and customers with wireless access that is both dependable and secure; however, we need to do so in a way that won’t require costly set-up, deployment and maintenance. Dell SonicPoints enable us to provide the high-performing wireless access we need quickly and easily, and without having to constantly worry about whether or not this access is secure. In addition, we don’t need to buy a separate wireless access controller to manage the SonicPoints, as this is built into the Dell SonicWALL firewall, which saves us both time and money,” said Gerry Pollet, founder and CEO, Zapfi.

“Our customers need to be able to provide their employees with dependable wireless access – it’s no longer a ‘nice to have.’ But even the best wireless access is meaningless if it isn’t secured – and smaller businesses are often susceptible to a wireless attack or significant data breach with the potential cost not just in lost revenue, but also in lost reputation, something that’s difficult to repair. With Dell SonicPoints, our customers benefit from the latest applications and services while ensuring their data is secure. Dell SonicPoints have helped us to maintain a competitive advantage in the market and have helped diversify our portfolio," said Deepak Thadani, president, SysIntegrators, LLC.

For some, deploying and managing a wireless network can seem like a scary proposition, not to mention expensive. However, with the Wireless Network Security solution, setting up and managing a wireless network is neither. Dell SonicWALL firewalls include a built-in wireless controller that automatically detects and provisions every attached SonicPoint. This saves time and cuts costs. Ongoing management is done through the firewall as well, so IT can manage both wireless and security through a single pane of glass. Having both security and high-speed wireless in one solution that’s easy to deploy and lowers TCO makes the choice simple. Customers tend to agree.

For more information on the new Dell SonicPoint Series and Dell SonicWALL Wireless Network Security solutions, visit our website.

Dell TechCenterProve compliance with industry regulations

 Launching a bring-your-own-device (BYOD) initiative? Expanding a mobility program? As you enable mobile employees to access enterprise information and resources in new ways, maintaining compliance with key data security and privacy regulations must be a top priority. For example, healthcare organizations must meet the privacy requirements outlined in the Health Insurance Portability and Accountability Act (HIPAA). Publicly traded companies must comply with the Sarbanes-Oxley Act (SOX), which regulates a company’s electronic records.

Of course, achieving compliance is just one part of the puzzle. You also need to be prepared to prove compliance to compliance officials. Whatever mobile enablement solution you choose must enable you to create audit trails and produce reports quickly and efficiently.

Dell Enterprise Mobility Management (EMM) is a comprehensive mobile enablement solution for smartphones, tablets, laptops and desktops that can help you protect data, maintain compliance and efficiently demonstrate compliance through extensive reporting and auditing capabilities.

Protect information, maintain compliance

Dell EMM offers a wide range of policy-based security capabilities to help ensure sensitive data stays protected and your organization remains in compliance with regulations. For example, Dell EMM employs a secure enterprise workspace approach that provides a distinct, protected environment on host devices that separates enterprise applications and data from personal ones. You can implement data loss protection (DLP) restrictions that keep users from copying data to their personal environment or sharing data beyond the workspace. Encryption capabilities help make sure enterprise data is secure whether at rest on a mobile device or in motion between the device and the enterprise network. And auto-lock/auto-kill features let you prevent unauthorized users from accessing enterprise data if a device is lost or stolen.

With Dell EMM, you also have granular control over who accesses information and how they get that access. So, for example, to help maintain HIPAA compliance, a hospital might want to restrict access to patient data by preventing each staff member from accessing enterprise email from more than one device. You can do that easily with Dell Mobile Workspace — the Dell EMM workspace component for Apple® iOS and Google® Android™ smartphones and tablets. Administrators set the policy for each individual or group. When a user logs in to the workspace, an agent identifier embedded in Mobile Workspace communicates with the back-end environment to confirm that the user and workspace (on that particular device) are allowed access to enterprise information and resources.
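The one-device-per-user restriction described above boils down to a simple enrollment check. Here is a minimal sketch of that logic; the function names and data model are purely illustrative assumptions, not Dell EMM’s actual API:

```python
# Hypothetical sketch of the device-limit policy described above.
# All names here are illustrative, not Dell EMM's real interface.
ALLOWED_DEVICES_PER_USER = 1  # e.g., a HIPAA-driven policy: one device each

enrollments = {}  # user -> set of enrolled device IDs

def request_access(user: str, device_id: str) -> bool:
    """Grant workspace access only if the device is already enrolled,
    or the user is still under the enrollment limit."""
    devices = enrollments.setdefault(user, set())
    if device_id in devices:
        return True  # known workspace on a known device
    if len(devices) >= ALLOWED_DEVICES_PER_USER:
        return False  # policy: no second device for this user
    devices.add(device_id)
    return True

print(request_access("dr_smith", "phone-1"))   # first device enrolls: True
print(request_access("dr_smith", "tablet-9"))  # blocked by policy: False
```

In the real product this check happens server-side when the workspace agent identifier contacts the back-end, but the decision rule is the same shape.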

You can also use Dell EMM to quickly identify and resolve potential compliance issues. For example, you might support mobility by enabling employees to use personally owned laptops with Dell Desktop Workspace — the secure enterprise workspace component of Dell EMM that supports Windows-based tablets and laptops. You can configure the administrative console to send you an email alert if a required application is removed from the workspace.

Prove compliance

Dell EMM also offers reporting and auditing capabilities to help you demonstrate compliance to regulatory officials. In the case of the hospital above, administrators could run a report from the Mobile Workspace console that shows a complete list of users and devices that have access to patient information. You could also create a HIPAA-compliant group within the Mobile Workspace console — if HIPAA regulations would be relevant only to a select group of users — and then produce a report for just that group.

Dell EMM also allows you to apply policies that prevent corporate data from being exposed through applications on a device. For example, you can apply a policy to prevent mobile employees from copying and pasting data between corporate and personal apps. To demonstrate your compliance, you can log file operations and produce an audit trail of an employee’s activities.

You can tap into additional compliance reporting functionality using Dell KACE K1000 as a Service, which is a component of Dell EMM. The K1000 provides wizard-based reporting tools to help you simplify routine and ad hoc reporting. It includes pre-configured reports and allows you to create custom reports for specific needs, such as auditing and tracking key administration activities by time and owner. The solution’s dashboards and graphical reports help provide fast insights for regulatory compliance, software license validation and other management tasks.

The K1000 also lets you manage and run reports on a wide range of nontraditional devices, ranging from connected office printers to medical dialysis machines. The K1000 helps ensure that all the “things” that comprise the Internet of Things (IoT) remain compliant with key regulations.

Across all industry sectors, organizations are under growing pressure to protect private information and meet increasingly stringent regulatory demands. With Dell EMM, you can secure information, comply with regulations and prove that compliance while controlling administrative complexity.

To learn more about maintaining compliance with Dell EMM, read the white paper, “Implementing mobile/BYOD programs while maintaining regulatory compliance.”

Dell TechCenterThere's Still Time to Register for Today's #ActiveDirectory Webcast: Conquer Your Top 4 Challenges @DellSoftware

Managing the security and uptime of a Windows network requires you to master your Active Directory — the brain and heart of your network — with maximum efficiency. To effectively manage Active Directory, you have to overcome four key challenges...(read more)

Kevin HoustonQuestions Answered About Dell’s 2nd PERC in VRTX

I had a reader email me with a few questions about Dell’s PowerEdge VRTX 2nd PERC, so I thought I’d write about it with the intent of helping others.  If you have other blade-related questions (whether Dell or not) let me know and I’ll see what I can find.

I have a few questions about the (2nd) PERC controllers (on the VRTX) now that WB (write-back) is supported in a dual controller configuration and was wondering if you can help:
1. Are the PERCs still active/passive?
2. How are the caches on each card kept consistent?
3. Do the PERCs have battery backup?
a. If so, how often do the batteries need replacing?
b. Do I need to power off the VRTX chassis to perform this replacement?
c. If the battery failed, then the cards go into write-through mode – is this correct?

These are all great questions that I had to take to the Dell product manager.  To be completely transparent, I received answers fairly quickly, but I’ve been procrastinating on publishing them. Here are the answers that were provided regarding Dell’s 2nd Shared PowerEdge RAID Controller on the VRTX:

1. Are the PERCs (PowerEdge RAID Controllers) still active/passive?
Answer: yes

2. How are the caches on each card kept consistent?
Answer: In write-back mode, an IO does not complete back to the host until the caches on both controllers have been updated.
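That completion rule is worth spelling out. Here’s a toy model of it in Python – a sketch of the acknowledgement ordering only, not PERC firmware, and all the names are made up:

```python
# Toy model of the dual-controller write-back rule above: the host's write
# is acknowledged only after BOTH controller caches hold the data.
# Real PERC firmware mirrors cache over an internal link; this just
# illustrates the ordering guarantee.
class Controller:
    def __init__(self, name):
        self.name = name
        self.cache = {}

def write_back(active, passive, lba, data):
    active.cache[lba] = data   # stage in the active controller's cache
    passive.cache[lba] = data  # mirror to the peer BEFORE acknowledging
    return "ack"               # only now does the IO complete to the host

a, p = Controller("active"), Controller("passive")
write_back(a, p, 100, b"block")
assert a.cache[100] == p.cache[100]  # caches are consistent at ack time
```

Because the ack is withheld until both caches match, a failover to the passive controller never loses an acknowledged write.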

3. Do the PERCs have battery backup?
Answer: yes

a. If so, how often do the batteries need replacing?
Answer: Batteries for Dell’s PowerEdge RAID Controller (PERC) have a 3-year warranty.

b. Do I need to power off the VRTX chassis to perform this replacement?
If the user already has 2 PowerEdge VRTX Shared PERCs, then all the blades must be powered down first.  Dell has the update procedures in the Dell PowerEdge VRTX Storage Subsystem Compatibility Matrix here:   

c. If the battery failed, then the cards go into write-through mode – is this correct?
If the battery on the active controller dies, then yes, the VDs (virtual disks) will transition to WT (write-through).  If the battery on the passive controller dies, the VDs will not transition to WT unless/until a failover occurs and the passive controller becomes active.   One note: there is a case where, if the passive battery dies, the VDs on the active controller may go to WT, but after a controller reset (i.e., chassis power cycle) the VDs will transition back to WB, as long as the battery remains good on the active controller.
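The rule above reduces to a small piece of state logic: only the battery on the currently active controller decides the cache mode. A minimal sketch (my own illustrative model, not Dell firmware):

```python
# Illustrative model of the WB/WT transition rules described in the answer:
# a virtual disk runs write-back (WB) only while the ACTIVE controller's
# battery is good; the passive battery matters only after a failover.
def cache_mode(active_battery_ok: bool) -> str:
    return "WB" if active_battery_ok else "WT"

# Battery dies on the active controller: VDs drop to write-through.
print(cache_mode(active_battery_ok=False))  # WT

# Battery dies on the passive controller: the active side is unaffected...
active_ok, passive_ok = True, False
print(cache_mode(active_ok))  # WB

# ...until a failover promotes the passive controller to active.
active_ok, passive_ok = passive_ok, active_ok  # failover swaps roles
print(cache_mode(active_ok))  # WT
```

This is why a dead passive battery can look harmless right up until a failover, at which point the array quietly loses write-back performance.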


Thanks to the reader for submitting this question – I hope this helps you and many others looking at or using the Dell PowerEdge VRTX.  As mentioned above, if you have other blade server related questions, for ANY vendor – please let me know and I’ll see what I can find.  Thanks for reading!
Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 17 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Sales Engineer covering the Global Enterprise market.

Dell TechCenterDell, Software, and the Future of Backup: Q&A with Robert Amatruda

Last month, we at Dell Software had the pleasure of welcoming long-time industry analyst Robert Amatruda to the Dell family as the newest member of our data protection product team. Long one of the data protection industry’s most visible and well-respected third-party voices on issues relating to data backup, recovery and storage, Robert brings a wealth of insight and experience to the Dell team, and his presence will help us exponentially as we continue to enhance our portfolio of backup and recovery offerings.

As we prepare to turn the calendar to 2015, data protection is an industry in transition, as customers continue to seek new ways to optimize backup and recovery in the face of an evolving set of challenges. Likewise, the incumbent vendors in the data protection industry are undergoing enormous changes as well.  I recently sat down with Robert to discuss these industry and customer challenges, and why he thinks Dell is ideally suited to help customers address them.

Dell Software Data Protection banner

Q: As a long-time industry analyst with a unique perspective, what appealed to you about the prospect of working for Dell?

Robert: I’d been looking to transition out of the analyst community to pursue new opportunities that better leveraged my experience. As I looked out at the landscape of marquee companies in the IT space, Dell really stood out to me. It’s a company that has really stepped up and positioned itself well, and obviously, that starts at the top. Michael Dell realized much faster than other CEOs in the industry that being beholden to Wall Street was not the way to innovate. As an outside observer, watching Dell go private was fascinating, and I was extremely impressed with Michael and the way he handled the challenge. When I added it up, I thought Dell was a place where I could do some really interesting and innovative things. Dell has all the tools and all the attributes needed to be successful, not only in data protection, but in adjacent markets as well.

Q: How do you see software fitting into the overall Dell picture?

Robert: Dell is already seen by customers as excelling when it comes to helping them put in place the modern infrastructure they need in a fast and cost-effective manner. I think with software now, we really have the ability to expand that value proposition even further. Dell has the opportunity to provide end-to-end solutions that enable companies to drive business forward. Dell didn’t just go on an acquisition spree to compile features. It made judicious investments to bring together an end-to-end, solutions-oriented portfolio. There’s a lot of intrinsic value that gets brought to the customer when you can deliver comprehensive solutions, and those are the conversations we’re focusing on.

Q: What are some of the ways the data protection industry is evolving?

Robert: The IT buying center has really changed and diversified, especially when it comes to backup. We’re not dealing with “Billy Backup” anymore. We’re dealing with application owners, VM admins and other IT stakeholders who haven’t traditionally been tied to backup.  These people aren’t specifically immersed in backup products or processes; instead, they are more consumed with management, budget, time constraints, and application availability. The players have changed, so the discussion has changed. Data protection is no longer about ensuring the organization has a good backup somewhere if it’s needed. It’s about aligning a technology solution set around business operations.

Q: Why do you think Dell is positioned to address this need?

Robert: Where Dell really stands out in this space is through its ability to bring to the table a set of solutions that customers can mix and match in a manner such that they are more tightly aligning their infrastructure to those critical business considerations. We’re not pushing one-off products that may or may not fit into your environment. We offer integrated solutions, especially our appliances, that can be introduced in a non-disruptive manner, co-exist with your existing infrastructure, and deliver great benefits right out the box. It’s a powerful combination for customers.  

Dell PowerEdge servers

Dell TechCenterThe SSD Advantage: Breaking through the Performance Bottleneck

Author: Tien Shiah, Samsung Product Marketing Manager – SSD

Tien brings more than 15 years of product marketing experience from the semiconductor and storage industries.  He has an MBA from McGill University and an Electrical Engineering degree from the University of British Columbia.

As virtual environments become more widespread and applications require greater performance, you may find that traditional hard drives are becoming a bottleneck. Shifts toward private clouds, in-memory computing, and highly demanding transactional workloads require leading-edge server performance.

There are three components that can significantly boost server performance: the processor(s), the memory, and the storage. Processors and memory have been speeding up every couple of years for decades. Hard disk drives (HDDs)… not nearly as much.

To circumvent this problem, you could use dozens or even hundreds of HDDs, pooling the performance of many drives to meet requirements. Using many HDDs will boost performance, but it also brings headaches in terms of cost, maintenance, power consumption, cooling and space constraints.

Servers weren’t designed to hold hundreds of hard drives, so those faced with high-performance workloads and saddled with HDDs have been forced to buy external storage expansion, driving up costs. Hard drives also use a lot of power and generate considerable heat, increasing power consumption and cooling costs. Finally, adding dozens of hard drives decreases reliability and increases management complexity.
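To see why the spindle counts get out of hand, here’s a quick back-of-the-envelope calculation. The IOPS figures are assumed round numbers for illustration, not benchmark data:

```python
import math

# Rough sizing sketch: how many 7200RPM HDDs must be pooled to match the
# random-IO throughput of a single enterprise SSD? Both figures below are
# assumed ballpark values, not measured results.
hdd_iops = 200       # typical random IOPS for one spinning disk
ssd_iops = 50_000    # assumed figure for one enterprise SSD

drives_needed = math.ceil(ssd_iops / hdd_iops)
print(drives_needed)  # 250 spindles to equal one SSD's random IOPS
```

Even if the real SSD figure were a quarter of that assumption, you would still need more drive bays than most servers physically offer.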

Rethinking the Status Quo

There is an easy fix for these problems. Clearly, you will benefit from storage technology that can access data almost immediately (improved latency) and move large amounts of data quickly (increased throughput). You need storage that won’t consume much power, doesn’t produce much heat, and doesn’t cause cooling problems. Furthermore, you will greatly benefit from storage that’s highly reliable and doesn’t require extra management.

Not surprisingly, many companies like yours are now deploying flash-based, solid state drives (SSDs) that provide significantly faster access to data, resulting in increased performance and lower latency.

Unlike hard drives, which spin platters of magnetic media, SSDs have no moving parts. SSDs are typically made with NAND flash, which, unlike RAM, retains data on chips for long periods without power. Enterprise SSDs were designed around enterprise application I/O requirements, and their primary attributes are performance and reliability.

An SSD is probably the most cost-effective way to boost server performance. SSDs also fix power problems. A Samsung SSD, for example, consumes half the power of a typical hard drive. Further, SSDs generate almost no heat, and since one SSD can provide the performance of many HDDs, SSDs also can help solve issues with electricity consumption and cooling in data centers.

Moreover, SSDs help to minimize maintenance issues as they’re considerably more reliable than hard drives. They go into servers and storage arrays without additional hardware or management tools. And by sharply reducing drive counts, SSDs reduce complexity.

Demonstrating a Performance Delta

To give you a sense of the performance opportunities provided by SSDs, consider a recent analysis completed by Principled Technologies. Two PowerEdge R920 servers running Oracle with an OLTP TPC-C-like workload were tested. The first was configured with standard SAS hard drives, the second with Samsung NVMe PCIe SSDs. The performance delta between the two was quite significant.

While the performance of the PowerEdge server with the HDD configuration was good, the upgraded configuration with PCIe SSDs delivered 14.9x the database performance of its peer (meaning that it could complete nearly 15 times as much “work” as the standard configuration). This was accomplished with only one-third the number of total drives (8 SSDs vs. 24 HDDs).
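The per-drive advantage implied by those numbers is even larger than the headline 14.9x, since the SSD configuration used a third as many drives. The arithmetic, using only the figures quoted above:

```python
# Back-of-the-envelope from the Principled Technologies figures above:
# 14.9x the database work with one-third the drives implies a much larger
# per-drive advantage.
hdd_drives, ssd_drives = 24, 8
workload_ratio = 14.9  # SSD config completed 14.9x the work of the HDD config

per_drive_advantage = workload_ratio * (hdd_drives / ssd_drives)
print(round(per_drive_advantage, 1))  # 44.7x more work per drive
```

In other words, each SSD did roughly 45 times the work of each hard drive in this test.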

Unquestionably, SSDs deliver the best cost benefit for highly-intensive workloads, including transaction processing and data warehousing. In other cases, like virtual desktop infrastructure (VDI) or high-performance computing (HPC), SSDs are also ideal.

Typically, when a workload requires very high capacity, a large number of high-capacity HDDs may make more economic sense. However, even in these situations, many servers now include a few SSDs to maximize boot, swap file and random access performance, while using HDDs for capacity optimization. Dell servers support the use of an SSD and an HDD in the same chassis, at the same time.

Clear-Cut Benefits Drive Deployment

Samsung SSDs on Dell PowerEdge servers clearly benefit data center administrators and end-users. Buyers benefit by driving down acquisition and operating costs, while gaining more from reduced complexity and increased reliability.

But the major benefit comes from performance. Administrators will see immediate boosts in performance by deploying SSDs. They’ll be able to embrace performance-intensive workloads that were impossible to run on hard drives. End-users will experience better performance and increased uptime for current as well as new IT services.

The deployment of SSDs in enterprise environments is rapidly accelerating because the benefits are so clear-cut. No other technology has greater potential to transform your server (and data center) experience.

For more information about Samsung SSDs, please visit: 

For more information about the Dell PowerEdge R920 featuring the Samsung NVMe SSD, please visit: 

Dell TechCenterOpen Ethernet standards: enabling the next phase of Wi-Fi

Ethernet has become the ubiquitous choice for networking. There are a multitude of aspects that one might point to as the key to its success – low cost, ease-of-use, multi-vendor support, backwards compatibility, or simply being “good enough” at the right cost point to be an attractive solution.  At the end of the day, all of these characteristics contribute to its success, but one shouldn’t lose sight of the fact that the standards developed within the IEEE 802.3 Ethernet Working Group play a key role in enabling all these factors.

For a company like Dell, which evangelizes the concept of open standards, support for the IEEE 802.3 Ethernet standards is a fundamental aspect to our business model. As noted, these standards enable multi-vendor interoperability, and therefore the very competition that makes the networking industry the thriving entity that it is.

This is an important aspect of the Ethernet DNA that should never be forgotten by the Ethernet community. With this year being so tumultuous for the Ethernet community, as new applications and rates emerged, it would be good for all to pause and recall this important lesson. For example, consider recent events related to Ethernet’s role in supporting wireless access points in enterprise applications.

The deployment of Wi-Fi technology supported by a Gigabit Ethernet infrastructure has been quite successful in enterprise applications for a number of years. Wi-Fi will get a jump in performance with the introduction of 802.11ac technology. The figure below compares 802.11n against 802.11ac technologies for clients and access points.

This increase in wireless capabilities will stress the existing legacy cabling infrastructure, based on CAT5e and CAT6 cabling. Unfortunately, CAT5e is not specified for 10GBASE-T operation, and reach on CAT6 cabling is cable-specific and may not achieve 100m, which was the specified reach for 1000BASE-T operation. This is the third key trend I noted recently in my “Top 5 Networking Predictions” for 2015.

With adoption of the 802.11ac technology looking positive, and forecasts by many analysts even rosier, the impending potential pressure on the Ethernet cabling infrastructure will need to be addressed.

Two alliances, the NBASE-T Alliance and the MGBASE-T Alliance, have emerged to address this application space. As with the beginning of any organization, membership recruitment is expected. It is interesting to note that, while there are two alliances, multiple companies are members of both. And while the potential fighting between alliances may raise concern, the fact that there are two should actually highlight the need for a solution.

In November, the IEEE 802.3 Ethernet Working Group formed the “Next Generation Enterprise Access BASE-T PHY Study Group,” which is the first step to be taken in a standardization effort. As this group begins to meet, it is imperative that all parties remember what the industry needs: a single standard that will enable multi-vendor interoperability. That is, after all, the Ethernet way!  

Dell TechCenterDispelling the myths: Uncovering the truth of SSDs

Author: Tien Shiah, Samsung Product Marketing Manager – SSD

Tien brings more than 15 years of product marketing experience from the semiconductor and storage industries.  He has an MBA from McGill University and an Electrical Engineering degree from the University of British Columbia.

Thanks to incredible performance, reliability, and decreasing price points, solid state drives (SSDs) are becoming increasingly common in today’s desktops and notebooks. But since SSDs aren’t the most common data storage technology, some myths persist that can make some buyers wary. Let’s explore the truth of SSDs to see when and where they’re a good choice for desktops and notebooks.

Myth #1. “SSDs aren’t as fast as HDDs in some circumstances”

SSDs are really fast. A typical 7200RPM laptop hard disk provides about 200 IOs per second (IOPS). A typical consumer SSD, like the Samsung 850 PRO, provides thousands of IOPS along with a tenfold or greater reduction in latency. In other words, an SSD not only sends more data at a time, it responds to requests much more quickly.
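A worked example makes the gap concrete. Using the 200 IOPS figure above and an assumed 20,000 IOPS for the SSD (the text says only “thousands”), consider servicing 100,000 random reads one at a time:

```python
# Worked example from the figures above: time to service 100,000 random
# reads at queue depth 1, considering only drive speed. The SSD IOPS
# figure is an assumed mid-range value, not a quoted spec.
hdd_iops = 200
ssd_iops = 20_000

requests = 100_000
print(requests / hdd_iops)  # 500.0 seconds on the laptop HDD
print(requests / ssd_iops)  # 5.0 seconds on the SSD
```

More than eight minutes versus five seconds for the same work, which is why boot and application-launch times feel so different.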

It’s true that HDDs perform well in sequential data access, which refers to accessing large contiguous blocks of data located on adjacent locations of a hard disk platter. But HDD performance on sequential disk access doesn’t outstrip SSD performance – SSDs are faster than HDDs at any task.

What does that mean for a user? On essentially every client application and task, an SSD will outperform a hard drive. Boot times are faster, application launches are faster, I/O intensive tasks are faster, and shutdown is faster. The quickest way to boost client performance is by adding an SSD.

Myth #2. “SSDs can’t be wiped securely”

Data security worries many users, especially businesses and governments, because there’s real risk in misplaced data. During an average notebook’s lifecycle, the system might move from one user to another, be deployed in another country, and then finally be decommissioned when obsolete. In any of these cases, it’s important to securely wipe the drive to avoid data security breaches.

It’s true that HDD wipe utilities can’t be used with SSDs, due to the technology differences. But it’s also true that most vendors supply software to facilitate secure wiping, like Samsung’s Magician. When an ATA Secure Erase (SE) command is issued, the SSD resets all of its storage cells by releasing stored electrons - thus restoring the SSD to its factory default condition. SE will process all storage regions, including the protected service regions of the media.

Myth #3. “SSDs are unreliable”

This is a common misunderstanding. HDDs are a tried and tested technology, while SSDs are newer, so many buyers wonder: are SSDs as dependable as hard drives?

“SSDs wear out.” It’s true, the memory cells used in SSDs have a limited number of read/write cycles before they burn out. However, consumer SSDs are engineered to account for this issue. Technology ensures that the drives wear evenly. Manufacturers also put “spare” memory on the drive (just like having spare tracks on a spinning HDD) that can replace dying or dead cells on a device.

Solid-state drives also use wear-leveling, which sends each write to a different cell rather than writing to the same cell again and again. This evens out wear and extends drive lifespan. Workloads are typically read-intensive (usually two to three reads per write), and reads don’t wear an SSD’s cells, so most application activity has no impact on the SSD’s operational life.
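A toy model of the wear-leveling idea (real controllers use far more sophisticated firmware, so treat this purely as an illustration): every incoming write is directed to the least-worn cell, which keeps erase counts nearly uniform across the whole drive.

```python
# Toy model of wear-leveling: every write goes to the least-worn cell,
# so erase counts stay nearly uniform across the drive.

class WearLeveledDrive:
    def __init__(self, num_cells):
        self.wear = [0] * num_cells  # erase count per cell

    def write(self):
        """Direct the write to the least-worn cell; return its index."""
        cell = self.wear.index(min(self.wear))
        self.wear[cell] += 1
        return cell

drive = WearLeveledDrive(num_cells=8)
for _ in range(100):
    drive.write()

# 100 writes over 8 cells: four cells hit 13 times, four hit 12 times.
print(max(drive.wear) - min(drive.wear))  # 1
```

Without leveling, a hot cell that absorbed every write would die after a few thousand cycles; with it, the whole pool wears down together.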

What does that mean in practice? A 256 GB SSD used in a corporate client environment that writes 40 GB per day has an expected lifespan of over 16 years – your SSD will likely outlive the other components of your system.
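That lifespan figure is easy to sanity-check with back-of-the-envelope arithmetic. The program/erase cycle count and write-amplification factor below are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope SSD lifespan estimate.
capacity_gb = 256          # drive capacity
pe_cycles = 3000           # ASSUMED program/erase cycles per cell (MLC-class)
write_amplification = 3.0  # ASSUMED ratio of physical writes to host writes
host_writes_gb_per_day = 40

total_endurance_gb = capacity_gb * pe_cycles  # 768,000 GB of cell writes available
physical_writes_per_day = host_writes_gb_per_day * write_amplification
lifespan_years = total_endurance_gb / physical_writes_per_day / 365

print(f"{lifespan_years:.1f} years")  # 17.5 years -- consistent with "over 16 years"
```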

Going beyond lifespan, SSDs are also more durable and reliable than their HDD counterparts. Hard drives, full of moving parts, are susceptible to damage caused by loss of power or physical impact. Solid State Drives are more robust simply because they don’t have moving parts, and are rated to be four times more shock resistant than their HDD counterparts.

A study by Samsung, Google, and Carnegie Mellon also shows annualized return rates (ARR) for SSDs that are twelve times lower than for HDDs.

Dispelling the Myths and Next Steps:

Buyers don’t have to be worried about SSDs – they’ve proven their worth in millions of client deployments. It’s fair to say that SSD should be considered for most desktops and notebooks, especially those running performance intensive tasks, those being moved from site to site, or those where data security is an important consideration.

Samsung, the world’s leading supplier of SSDs, offers a broad portfolio covering all of your capacity, form-factor, and interface needs.

For more information, please visit: 

Dell TechCenterTop 4 Strategies to Improve Advanced Analytics – On-Demand Webcast

Marketing uses it to personalize offers. Finance uses it to detect fraud. Healthcare uses it to improve patient outcomes. Pharmaceuticals use it for product safety. Manufacturing uses it to monitor suppliers.

It’s advanced analytics, and we’ve put together an on-demand webcast about the top 4 strategies for improving how your organization uses them. Whether you’re still trying to figure out what to do with all the data coming into your company or you’re well on your way to making Big Data work, you’ll take away new strategies in four categories:

  • Agile environments – Agile has gone far past software development to extensible, open, flexible analytics handling real-time data.
  • Open platforms – To break down the silos that impede complete analysis of data, you need a “data-agnostic” analytic platform that is open and diverse, spanning relational and non-relational databases, SQL and NoSQL sources, and public and private clouds.
  • Templates and wizards – With their potential for encapsulating the expertise of a data scientist, your wizards and templates can share accumulated knowledge and insights with others, effectively distributing those skills across your organization and equipping even novices to analyze much more deeply.
  • Business alignment – Analysts should start every project at the end, asking participants, “How do we know we’re done? How do we evaluate whether we were successful?” When you show other people what the desired results look like, everything else flows from there.

Join three experts from Dell Software for this 60-minute on-demand webcast: David Sweenor, product marketing manager for analytics; Thomas Hill, Ph.D., executive director of analytics; and Joanna Schloss, Dell subject matter expert.

Here are a few takeaways from the webcast:

  • “Analytics is not just analyzing data. Analytics is the process of embracing data as an organization and verifying and validating that the numbers are computed and generated as intended.” This applies to everything from the expiration date on a prescription to the composites in the wings of a Dreamliner.
  • “R today is really the Wikipedia of analytics.” R continues to grow as a robust platform for analytic processes, for analytic validation and for analytic libraries of content. Some vendors support proprietary versions of R and proprietary versions of NoSQL sources, which locks down your ability to be agile in exploring your data.
  • “One of the biggest risks in any surgery is infection. It is an extremely expensive risk, especially to the patient.” If, while performing surgery, the hospital collects information, observations and tests, and then integrates them with electronic medical records and your personal and insurance data, it’s possible to reduce that risk.
  • “If you lend money to someone and you're not getting paid back, you have a big, big problem.” In the financial context, the essence of all analytics is accurately assessing the risk involved in our loan portfolios. That assessment includes modeling, wizards, flexible and heterogeneous data from sources as novel as Web-based lending, and even the application of business rules.
  • “The data scientist does not scale.” When finally done right, analytical tools will be automatic, easy to use and accessible, and they will generate predictable results that stand up to scrutiny.

The on-demand webcast is ready and waiting for you. Enjoy it, and send me any questions in the comments below.

Dell TechCenterSkip Garner of Virginia Tech on Why HPC Matters

Skip Garner of Virginia Tech offers his ideas about why HPC matters and other important industry issues at SC14.

Dell TechCenterPerformance Testing - Migrating Large Lotus Notes Databases to Office 365

I am continuing my review of how we performance-test our Migrator for Notes to SharePoint (MNSP) tool. You may have read my earlier document on performance testing: Performance Testing - Migrating Large Lotus Notes Databases to SharePoint 2013

In this round of performance testing, I am migrating Notes documents from a custom Lotus Notes discussion database to Office 365. I hope readers can use these results as a relative comparison for their own purposes. However, I caution everyone that their own results will vary depending on their test environment and test databases.

To download Performance Testing: Migrating Large Lotus Notes Databases to Office 365, visit

To learn more about Dell’s solutions for migrating from Notes and Domino, visit

To download a trial copy of Migrator for Notes to SharePoint, visit

Randy Rempel
Senior Product Manager

Dell TechCenterTekDog Training Integration Now Available in Quick Apps for SharePoint 6.4

The Quick Apps for SharePoint team has been busy working on the latest release, and just in time for the holidays, Quick Apps for SharePoint 6.4 is now available. If you are not familiar with it, Quick Apps for SharePoint is a code-free customization tool. With 21 different web parts, Quick Apps helps you build applications that are easily supported, maintained and upgraded, ensuring their long-term impact and improving SharePoint ROI.

One of the improvements we are most excited about in version 6.4 is the new set of quick-start training videos provided by TekDog. TekDog offers a fantastic range of SharePoint user-adoption and Nintex Workflow training packages covering a variety of topics and scenarios, giving individuals and enterprises creative, innovative ways to drive adoption rates and ROI. That is one of the reasons we decided to partner with TekDog: not only do they provide our users with easy-to-use Quick Apps training, they also deliver other high-value SharePoint training, and our customers benefit from both the integration and the training options TekDog provides. To learn more about TekDog training, check out their website via the following link:

When you add a Quick Apps web part to a page a link to the training videos will be provided. Here is an example of a "how to" training video on creating a chart with multiple series of data.

In Quick Apps 6.4, the quick-start videos cover the initial configuration steps needed to get the web parts working. The computer-based training we used previously was difficult to navigate: some videos were quite long, and finding a specific topic was hard. Going forward, each topic per web part is conveniently split out into shorter videos that are easier to find. Another advantage of the new approach is that the videos are human-led; it probably goes without saying, but the computerized voice of the older videos was difficult to listen to for an extended period. Jason Keller from TekDog leads each of the Quick Apps videos and makes the training process not only more efficient but also more enjoyable.

We plan to expand the initial offering to cover advanced Quick Apps scenarios such as line-of-business integration, parent-child relationships, CAML filtering, and forms integration, to name a few. Additionally, because TekDog is a premier Nintex training partner, we can also provide Quick Apps / Nintex Workflow integration training – something we haven’t offered in the past but are excited to make available within the “advanced” training scenarios next year. We feel the TekDog integration strengthens the product’s ability to deliver an easy-to-use, code-free customization tool that any SharePoint user in your organization can use.

Dell TechCenterDell 13th generation PowerEdge serves up both higher performance and lower power consumption

Dell's 13th-generation PowerEdge R630 1U rack server, based on the Intel Xeon processor E5-2600 v3 product family, proves capable of producing 9.5% more work and 19% better overall energy efficiency than a like-configured R620, its two-year-old predecessor. Even when idle, the R630 consumed 14% less power, saving a substantial 88 kWh of electricity per year.
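The two headline numbers are consistent with each other: since efficiency is work per unit of energy, 9.5% more work at 19% better efficiency implies roughly 8% less average power under load. The quick check below is illustrative arithmetic only:

```python
# How the headline numbers fit together: efficiency = work / power.
work_ratio = 1.095        # R630 does 9.5% more work than the R620
efficiency_ratio = 1.19   # at 19% better work-per-watt

power_ratio = work_ratio / efficiency_ratio
print(f"Relative power draw under load: {power_ratio:.2f}")  # 0.92, i.e. ~8% less power

# The idle figure can be read the same way: 88 kWh/year spread over a year
# of continuous idle operation corresponds to roughly a 10 W lower idle draw.
idle_saving_watts = 88_000 / (365 * 24)
print(f"Idle saving: {idle_saving_watts:.1f} W")  # 10.0 W
```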

The energy efficiency of PowerEdge servers, configured as many enterprise data center customers would configure them, has improved 163x over the past ten years. IT customers demand servers that can perform more work while reducing data center footprint, electricity use and TCO, and Dell makes the engineering investment to provide just that.


The SPECpower_ssj2008 industry-standard benchmark for measuring compute-server energy efficiency was used for this comparison. For more information on how SPECpower_ssj2008 works, see

The full white paper can be found here on TechCenter at




Dell TechCenterDelivering high performance for mainstream applications without wasting your budget or hampering flexibility

Data center managers face a perpetual dilemma: finding infrastructure that supports the performance demands of ALL their Oracle applications while balancing budgetary constraints. Many applications and database instances require a lot of horsepower but can’t cost-justify deployment on expensive, high-end infrastructure. The result is a performance gap between traditional infrastructure based on common SAN technology and high-end, proprietary infrastructure best suited to the top tier of mission-critical data warehouse and analysis applications. Building performance into the traditional environment generally involves new storage, new servers and new licenses, as well as a great deal of time waiting for budget approval and implementation, and may still not address the actual bottleneck. On the other hand, deploying the wrong type of application on a high-end, purpose-built system generally leads to overcommitted infrastructure that runs very inefficiently, taxing expensive resources, underutilizing their capacity, and driving expensive server sprawl to meet the performance need.

Dell offers a solution for this “middle band” of applications, one that balances performance with total cost of ownership. The Dell Integrated System for Oracle Databases (DISOD) is an easy-to-procure yet flexible-to-expand architecture perfectly suited to these needs. Based on Dell’s Acceleration Appliance for Databases (DAAD), the DISOD couples two PowerEdge R920 servers, each with up to four 10-core Intel E7 v2 processors, with the ultra-fast performance of DAAD, a 12 TB all-flash storage appliance leveraging SanDisk technology that can deliver over a million IOPS, 0.51 ms latency, and up to 5.7 gigabytes per second of sustained throughput in a data warehouse or transaction-processing environment. Pre-installed with Oracle Linux with the Unbreakable Enterprise Kernel, DISOD is delivered ready to be installed with Oracle Database 12c, 11gR2, or 10g, providing the flexibility to fit into any existing infrastructure. The Dell hardware configuration in this article is fully supported on Oracle Linux with the Unbreakable Enterprise Kernel. Find out more about this innovative architecture featuring Oracle Linux by clicking here.

Dell TechCenterAttention UK Mac Users - keyboard mapping

Hello Guys and Girls,

I was talking to a customer the other day about Macs, and he mentioned that one of his bugbears was not being able to use his Mac keyboard properly. For example, the \ key is obviously very helpful when connecting to a share (\\server\share), and when it’s not mapped correctly it’s a real pain.

Searching the internet, it seems that this is a fairly common complaint when using a UK Mac Keyboard and connecting to a Windows Server/VDI via RDP.

Fortunately, MS have a utility called "Keyboard Layout Creator" which you can use to edit your keyboard mappings so that the keys are in the same place as on your Mac keyboard.

I've used this utility to create a new keyboard layout "United Kingdom - Mac" which I've installed into my master image. This means if I ever connect from my Mac mini, I can select the correct keyboard [:D]

I've attached my copy here - feel free to provide feedback or use the MS utility to create your own layouts.

Thanks, Andrew.

Dell TechCenterWindows Server 2008 Service Pack 2 performance on Dell PowerEdge R220

This blog post was originally written by Tilak Sidduram from the Dell Windows Engineering Team.

With less than a year left before Microsoft ends extended support for Windows Server 2003, including R2, most organizations still have some systems that need to be migrated to a newer version of the operating system (OS).  The best candidate to replace Windows Server 2003 is Windows Server 2012 R2, but some organizations might have to opt for Windows Server 2008 due to application or hardware support limitations.  If your organization will be moving to Windows Server 2008 you will want to read this blog; we talk about migrating to Windows Server 2008 SP2 (32-bit performance) on PowerEdge R220 and the various devices that are supported on this server.

Mainstream support for Windows Server 2003 ended in 2010, and the OS has been in extended support since; by now everyone is well aware that all Microsoft support for Windows Server 2003 ends in July 2015. While Microsoft and Dell usually recommend migrating to the latest shipping server OS, some customers are cautious, with a few needing or wanting to stick with a supported 32-bit operating system. Windows Server 2008 SP2 was the last Microsoft server operating system offered in a 32-bit architecture.

Service Pack 2 was released a year after Windows Server 2008 launched, and it is the only supported version of the OS. The operating system is built from the same code base as Windows Vista, so it shares much of the architecture and functionality of that client OS. It’s important to note that Microsoft mainstream support for this OS ends in January 2015, so any migration plans to this OS should be considered temporary at best.

Windows Server 2008 is available in different editions, but for the purposes of our bench testing with the PowerEdge R220, we have only considered Windows Server 2008 SP2 32-bit Standard Edition.  Dell does not officially support any version of 2008 SP2 on the PowerEdge R220 platform and does not recommend this OS/server combination for production use.

In this blog, we have outlined the performance of Windows Server 2008 SP2 32-bit with PowerEdge R220 and the various devices that can work on this server/OS combination. We will also describe the different deployment methods that can be used to deploy Windows Server 2008 SP2 on this server and the scope of testing that was performed.

Due to driver and OS compatibility issues with the Intel chipset on this server, the 64-bit variant of this OS is not viable; you should only attempt to deploy the 32-bit version.

Listed below are the hardware peripherals supported on PowerEdge R220 that have 32-bit drivers. All of these drivers can be downloaded from the Dell Support website and then installed after the OS is deployed.

  • Intel chipset
  • Dell PERC S110 storage controller
  • Dell PERC H310 & H810 storage controller
  • On-Board SATA controller either in ATA or AHCI mode
  • Broadcom network controllers
  • Intel network controllers
  • Qlogic network controllers
  • Matrox video controller.

Windows Server 2008 SP2 does not contain inbox drivers for any of the devices listed above. The PERC S110 and H310 are the two storage controllers supported on the R220; because their drivers are not available inbox in Windows Server 2008 SP2, you must download the required driver from the Dell Support website and supply it at installation time in order to install the OS.

Dell supports different methods of deploying a Windows Server OS. Below is the list of Dell supported deployment methods that can be used to deploy Server 2008 SP2 on the R220. For more details on these, refer to this Dell Knowledge Base article.

  • Deploying Windows Server 2008 SP2 using the operating system installation DVD.
  • Deploying Windows Server 2008 SP2 using Dell Lifecycle Controller
  • Deploying Windows Server 2008 SP2 using Dell Systems Build and Update Utility

Testing coverage and scope:

To ensure full coverage and testing of this OS, various scenarios were taken into consideration and a proper test plan was put in place. The OS testing mainly covered installation using the various supported devices and methods – optical drive, recovery media, USC, SBUU, PXE, and iSCSI – with HDD/RAID combinations using both the S110 and H310 controllers.

System functionality testing was performed in both RAID and non-RAID configurations. All listed network controllers were used, and device-specific tests were performed on those controllers. Some of the features covered include Wake-on-LAN, iSCSI, offload, and jumbo frame sizes. Firmware and driver update testing was performed to make sure the devices update successfully to the latest firmware and driver versions and continue to function after the update.

Windows has robust event logging and reporting features. Various tests verified the logging and reporting of normal and abnormal events when an event was triggered in the OS. To emulate real-world user scenarios, various workload-specific tests were executed on the server under various CPU conditions, and these stress tests were run for several days.

Most of the listed components on this server were put under various stress workloads and checked for errors, stability and system functionality. On a final note, we covered most scenarios to make sure all listed devices were exercised in this testing; however, not all possible configurations were tested.

Dell TechCenterHow Technology Could Help Find a Solution to Online Child Safety

2014 marked the 25th anniversary of the World Wide Web – one of the greatest innovations of our time and one of our most valuable resources. However, while there’s little doubt it’s an incredible tool, it can also be a hugely dangerous one, with effective policing and security of online behavior still in its infancy.

As a parent myself, I’m forever wary of the dangers the internet poses not only to my own son but to the younger generation across the world. So I was delighted to recently join some of the most influential industry, law enforcement and government leaders at a 10 Downing Street-backed conference to debate and discuss future solutions to online child safety.

Tim Griffin of Dell UK meets with Prime Minister David Cameron

At Dell, we have a deep understanding of the global security market through our holistic and connected approach, which spans from endpoint to datacenter to cloud, and helps to solve today’s most complex security and compliance problems including fraudulent online behaviour. Over the last 12 months, we’ve been applying this industry expertise through our active involvement in the WeProtect programme – a forum set up by the Prime Minister’s Digital Advisor Joanna Shields to tackle the critical issue of online child abuse.

This week’s summit was an opportunity for everyone to reflect on the discussions they’ve had over the year and to put to work their expertise in applying practical solutions to some of the most pressing child exploitation issues. During the conference, industry leaders were split into six streams to address a particular challenge we could potentially tackle through the development of new technology solutions.

Joining us to look at the issue of identifying and protecting victims were Visa and two of our small business partners Relative Insight and Evidence Talks. Together, we looked at how Visa’s current metrics process to identify online fraudulent behavior, could be applied to conversations being had in chat-rooms by underage children. This is undoubtedly a complex issue but one we all felt could be addressed by empowering the young with the information they need to make their own decisions. The solution we came up with was a backend intelligence system linked into the central intelligence database to identify photos, information etc. sent by potential paedophiles to children.  This would allow children to be issued with an online warning if they are interacting with someone who is not who they say they are.

The feedback on our proposal, which Dell presented to the conference, was extremely positive, with delegates enthusiastic about the potential of the solution we helped identify. The Home Secretary, the Rt. Hon. Theresa May MP, and the Prime Minister, David Cameron, made specific visits to our stand, where we were demonstrating the solution, thanked us for the work we have undertaken and agreed on the future potential of our project. In fact, the Prime Minister was so supportive of our work that he subsequently made specific reference to our proposed solution in his speech.

It was a fantastic and really encouraging couple of days, with everyone focused on ensuring that some of the solutions presented can be quickly moved forward to help technology become an active protector of children online rather than an enabler of abuse.

Mark CathcartThe Austin media lynch mob

aka The Interwebs attack

I’ve been online in one form or another since 1978, and had an “output only” blog as far back as 1996. The thing that has most visibly changed over that time is the attack dog that is the Facebook thread, the blog comment storm: the faux outrage of people who have no real stake and no real interest, but for whom having an opinion makes them feel important – better still when they can be outraged.

“Don’t read the comments!” has been the mantra for years, but lately it is the mainstream that has become the lynch mob. Just this week here in Austin there was a major pile-on from mainstream local media over the name of a local PR company. It turns out the company chose what at first seemed a hip name, but which turned out to reference a major historic racial slur.

I’m not going to provide links or any other detail; it doesn’t matter what the company name was or what the reference was to. I admit, as did a bunch of my friends, that I had no idea either.

Yes, once told, the company should have changed its name and taken all the steps that come with that. Eventually, after the Interwebs piled in, they did change the name. Apparently they slipped up again, though, and fessed up to the new name before securing all the relevant social media “properties”, so a pretty unfunny parody account appeared almost immediately on Twitter. And then in piled the local media, with commentary from people who were, for the most part, working for media companies whose own output is staid at best. Sigh. You could sense they’d smelt blood; the company was theirs to web-shame, to twitbomb, and did they ever.

Ever wonder how the outraged get incited? Ever wonder how these stories start, and how they gain momentum and “go viral”? A new podcast, Startup, has the scoop on one of their own mistakes: what happened, how it came about, and how it was resolved.

Dell TechCenterBlack Friday weekend bloopers

We decided to monitor department stores, e-retailers and shopping websites to see how well they behaved over Thanksgiving weekend and on Cyber Monday. Darren Mallette and I created a Foglight environment that checked main-page availability for these sites, using monitoring agents on the east and west coasts. The startling conclusion we came to was that some of these randomly selected websites were NOT ready for a shopping weekend – and those that weren’t probably lost significant revenue as a result.
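For readers who want to try something similar, a minimal availability probe can be sketched in a few lines of Python. The timeout and “slow” threshold below are illustrative assumptions, not Foglight configuration:

```python
# A minimal sketch of a page-availability probe: fetch the home page,
# time it, and classify the result.
import time
import urllib.error
import urllib.request

def classify(status, elapsed, slow_after=5.0):
    """Turn a probe result into an availability verdict."""
    if status is None or status >= 500:
        return "DOWN"
    if elapsed > slow_after:
        return "SLOW"
    return "OK"

def probe(url, timeout=10.0):
    """Fetch the page once and classify the outcome."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code          # server answered, but with an HTTP error
    except (urllib.error.URLError, OSError):
        status = None              # no usable answer at all
    return classify(status, time.monotonic() - start)

# probe("https://example.com")  # returns "OK", "SLOW", or "DOWN"
```

A real monitoring setup would run probes like this on a schedule from multiple locations and alert on the verdicts.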

Let’s see some of the Black Friday weekend bloopers (names have been obscured).

Site 1 – A well-known department store in the middle of Cyber Monday.



Site 2 – A well known electronics site on Black Friday morning


Site 3 – A retail chain on Thursday (11/27) mid-day.


Site 4 – A retailer on Cyber Monday morning.



Site 5 – A wholesale store in the middle of Black Friday.


These examples illustrate why it is crucial to monitor website performance and availability on a regular basis: to catch errors and slowdowns that can resurface, magnified, during the holiday sales season. Being insufficiently prepared is costly; it damages brand loyalty and probably costs the business revenue.


Dell TechCenterDell Precision Helps Design the World’s First Indestructible Cooler

NOTE: This post is part of the Dell Precision “Purpose Built” series that started with last week’s post on Kenguru and Hargrove. “Purpose Built” shares the stories of ground-breaking design that touch the lives of many people.

Brothers Roy and Ryan Seiders grew up with a passion for the outdoors – hunting, fishing and traveling to outdoor industry tradeshows with their teacher-turned-entrepreneur father. But the coolers that were out there just weren't able to keep up with their outdoor adventures – the handles would break, the latches would snap off and the lids would cave in. Not only was it a hassle to replace their coolers after each season, but these cheaply built, ordinary ice chests were limiting their good times.

A grizzly bear chews on a Yeti cooler - image courtesy of Yeti Coolers

That frustration led them to a solution – to design the cooler they’d use every day if it existed. It would be built for serious outdoorsmen rather than for the mass discount retailers. Today, YETI Coolers offers the best premium coolers on the market and is known for delivering the ultimate in design, performance and durability.

“YETI represents an outdoor lifestyle and being outdoors with friends and family – that’s just truly a special experience and it’s not worth our time to sacrifice the experience based on cheap gear,” said Roy Seiders, CEO and co-founder at YETI Coolers. “We’re trying to bring a vision to market with a high-quality product and we’re going to use every tool possible and a big piece of that is technology.”

Today, YETI is bringing on new products, building a premium brand and defining what YETI stands for. And the technology the company’s  engineering team is using to develop these products from start to finish includes Dell Precision workstations and Dassault Systèmes SOLIDWORKS.

“When I was choosing a mobile workstation, it was really important that I found one that was certified to work with SOLIDWORKS,” said John Tolman, Engineer at YETI Coolers. “Whenever your computer and your software are acting smoothly and being really responsive, it just makes it so much easier to do your work and you’re more productive. Of all of the workstations that were certified for SOLIDWORKS, the Dell Precision workstations had the best specs. There’s no lag, it doesn’t matter how big a model I have open, whenever I click on something, it immediately responds to it.”

Using Dell Precision workstations and SOLIDWORKS allows YETI’s engineers to model out different products very rapidly. They’re able to take something from an industrial designer’s sketch and turn it into a solid model to have it 3D printed. The software also lets them show the rest of the team exactly what a product could or would look like so they can receive immediate feedback and make changes, making it possible to iterate faster and more often.

“Not only does it let us bring the product development cycle into a shorter timeframe, it also lets us make sure that we are back to our engineering philosophy of making sure that everything is just right, the way that we want it - and that it’s true to our brand and our design language,” said Tolman.

“Never in my wildest dreams did I ever think that YETI had the potential to be where we are today,” added Seiders. “With all YETI products we’re going to be pushing the envelope, taking no shortcuts.  We’re going to continue to innovate, creating products that add value and that are premium.” 

Watch the video to find out more about the technology and design process used by YETI Coolers to create their indestructible products. 


Dell TechCenterAre You Using the Best Tools to Protect Your Organization’s Data?

It’s all about the data. Anyone will tell you that…data is the new currency. This idea is really what “Big Data” is all about:

Our systems now have the capability to capture, store, track, and evaluate all sorts of information. We share some of our information freely: filling out online forms or posting messages or images on social media sites. Some information we inadvertently leave behind as digital artifacts: geo-locational information shared by our cell phones, using a credit card or rewards card to purchase goods, even using search engines to find information.

As my friend Alistair Croll wrote a couple of years ago: Big Data doesn’t have to be all that big. Rather it’s about a reconsideration of the fundamental economics of analyzing data.

To be successful in a world where the true value in data is having it available for analysis, you had better be sure that data is actually being captured, preserved, and always available. That job falls to those of us responsible for managing and maintaining IT environments, and one clear way to ensure this valuable data is always available is to have a robust backup and recovery plan. Traditionally, the job of backing up the data collected by the applications that run business operations fell to backup admins. Often, the critical operational job of making sure backups completed successfully was assigned to the most junior member on staff (because feeding an angry tape robot is no one’s idea of fun). Planning recoveries was often an afterthought, untested until an outage occurred.

From talking with customers, things don’t sound much different today when it comes to planning and staffing for data protection. If you are managing and maintaining IT environments today, not only do you have the traditional hardware environments to support, you’re probably also struggling to virtualize more and more, figuring out how cloud fits into your organization’s plans, and dealing with supporting all sorts of devices and services your business units want to use. The complexity and rate of change is enormous, yet in the back of your mind you know you need to have some sort of backup and recovery solution lined up for every single bit of it. But who has time to strategically plan that? Where to start?!?

As we’ve talked about in both semesters of Backup.U, your backup and recovery plan should be built with your business stakeholders. After all, those stakeholders are probably relying on the data being collected by their applications to create value for your organization. Your first step is to know from those stakeholders what they expect. Does the data being collected all have the same value? Is some of it more valuable, and does it need to be treated differently? How fast does your business expect that information to be restored in the case of an outage? Once you understand what your business needs and expects, you can choose the data protection tools required to build your backup and disaster recovery plan.
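As a back-of-envelope check on that last question, you can compare a stakeholder’s recovery-time expectation against raw restore throughput. The sketch below uses entirely made-up figures (10 TB of data, 200 MB/s effective restore speed, a 4-hour RTO) just to illustrate the arithmetic:

```python
# Rough restore-time estimate: can we meet the stakeholder's expected
# recovery time? All figures below are illustrative assumptions.

def restore_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to restore data_tb terabytes at throughput_mb_s MB/s."""
    total_mb = data_tb * 1024 * 1024          # TB -> MB
    return total_mb / throughput_mb_s / 3600  # seconds -> hours

# Example: 10 TB of critical data, 200 MB/s effective restore throughput
hours = restore_hours(10, 200)
print(f"Estimated restore time: {hours:.1f} hours")

if hours > 4:  # assumed 4-hour recovery-time expectation from the business
    print("Restore exceeds the expectation -- revisit the plan or the tools")
```

Even a crude estimate like this turns a vague expectation into a concrete conversation with the business about tools and trade-offs.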

One of the other key themes of Backup.U was the importance of evaluating the tools available in our industry’s collective data protection toolbox. Our industry has a pretty rich heritage for how business continuity should be maintained. In our modern data centers, there are so many choices for how to protect and restore data: backup, recovery, dedupe, compression, archiving, application-aware, array-based, cloud-based, I could go on and on. Some of the data protection tools that we still rely on are pretty old; for example, we’ve been backing up to tape for about forty years. Others, such as CDP and backing up to disk, are newer. As my Backup.U co-conspirator Greg Schulz says, “You need a firm understanding of the basics, and you need to keep up to date on the tools, and that’s what will guide you as you construct the backup and DR plan that fits the needs of your business.”


So, what is your current data protection plan? Are you using a single tool, or have you built your plan with several tools? What are the gaps in your plan, and what’s preventing you from closing them? I’d love to hear about your real world experiences with this in the comments, or on Twitter. Let’s learn from each other!

For some more insight on our own point of view regarding building a smarter backup, take a look at the tech brief below to see why we believe “One Size Never Fits All.”

Dell TechCenter: Managing the Internet of Things

The number of connected devices is growing rapidly across the globe. In addition to the smartphone in your pocket, the laptop on your desk and the tablet on your bedside table, there are internet-enabled sensors embedded into just about everything around you — from automobiles and oil pipelines to printers and pacemakers.

These devices and the systems they are connected to form an Internet of Things (IoT) that is ushering in important changes for individuals, businesses and entire industries. For example, in the insurance industry, vehicle telematics solutions enable companies to offer pay-as-you-drive rates and to collect detailed information seconds after an accident. New fitness devices enable anyone to track progress toward personal health goals, while medical devices can send alerts to doctors about potential issues even before a patient experiences symptoms. Sensors on water, oil or natural gas pipelines help energy and utilities companies identify breaks before they cause disasters.

For businesses, IoT technologies can help monitor, control and optimize a wide range of assets. A connected printer, for example, can tell a business group when it’s time to replace paper or ink. An overhead projector can alert facilities personnel when a bulb stops working. Conference room seats equipped with sensors and connected to facilities systems can automatically turn off lights and air conditioning when the room is not in use, or let people know whether a room is currently available.

There’s no doubt that IoT technologies show great promise. But how do you manage all these new, nontraditional devices and sensors?

Managing traditional mobile devices and PCs can be complex enough. Whether your organization has a bring-your-own-device (BYOD) program in place or just issues a variety of enterprise-owned devices, you might need to provision, manage and support a very wide array of device types and operating systems. Adding the growing collection of sensors in your environment to that list could add significant time and effort.

Select a solution that supports nontraditional devices

To combat that complexity, choose a management solution that supports nontraditional devices in addition to typical mobile devices and PCs. The Dell Enterprise Mobility Management (EMM) solution, for example, includes cloud-based components for managing smartphones and tablets (Dell Mobile Management) as well as for managing servers, desktops, laptops and an array of nontraditional devices (Dell KACE K1000 as a Service).

As the number of sensors under your management grows, K1000 as a Service helps streamline asset discovery and inventory. You can easily keep track of what you have and where it is.

With K1000 as a Service, you can also help maximize the benefits of the IoT by creating administrative alerts and automated actions based on sensor-generated data. In addition to sending alerts when it’s time to refill printer paper or replace projector bulbs, you can have the K1000 send an email alert to the facilities team if a sensor detects flooding in a bathroom. And you can also have it send an alert if a sensor hasn’t checked in with the K1000 in a certain amount of time — which could mean a device is malfunctioning or was stolen.
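That last rule, flagging devices that go silent, is easy to reason about in isolation. The sketch below is a hypothetical illustration of the check-in logic, not the K1000’s actual implementation or API; the device names and timeout are invented:

```python
import time

# Sketch of a "device hasn't checked in" rule: flag any device whose
# last check-in is older than a policy threshold. The threshold and
# device records below are hypothetical examples.

CHECKIN_TIMEOUT = 15 * 60  # 15 minutes, an assumed policy

def stale_devices(last_seen: dict, now: float, timeout: float = CHECKIN_TIMEOUT):
    """Return device IDs whose last check-in is older than the timeout."""
    return sorted(dev for dev, ts in last_seen.items() if now - ts > timeout)

now = time.time()
devices = {
    "printer-3f": now - 120,      # checked in 2 minutes ago -- fine
    "projector-a2": now - 3600,   # silent for an hour -- flag it
    "flood-sensor-7": now - 30,
}
print(stale_devices(devices, now))  # ['projector-a2']
```

A silent device is ambiguous, as the post notes: it may be malfunctioning, powered off, or stolen, which is why the alert goes to a human rather than triggering an automated action.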

Of course, administrators will not always be in front of the K1000 console when a connected device or sensor demands attention. So Dell offers a K1000 Go Mobile Application that lets administrators conduct a full range of management tasks and track tickets from their Apple® iOS and Google® Android™ smartphones and tablets.

Keep IoT activity separate from personal activity

The ability to monitor, manage and control a range of business assets remotely can be a huge advantage for in-house staff as well as third-party building and facilities management firms. Receiving an email alert on a personal smartphone or laptop can help significantly reduce response times.

Of course, you want to make sure those email alerts stay secure. If a personal device is lost or stolen, you don’t want unauthorized people to gain a foothold into your organization’s facilities or assets.

The secure enterprise workspace components of Dell EMM can help provide that security. With Dell Mobile Workspace and Dell Desktop Workspace, you can separate enterprise applications and data from personal ones. An email received within the Mobile Workspace app can be accessed only by the person with the right login credentials. You don’t have to worry that an unauthorized person will be able to monitor or control any connected devices or systems.

Generate new insights

Beyond helping you manage a growing number of devices, solutions such as the K1000 can help you capitalize on the wealth of data collected by sensors and devices in the office and beyond. By integrating analytics and business intelligence (BI) solutions with the K1000, you could generate new insights for enhancing operational efficiencies and improving your working environment. For example, you could analyze data coming from sensors attached to vending machines in your cafeteria to gauge the most popular snacks and make sure you don’t run out. Or, you could analyze the data collected by sensors on roadways to determine optimal driving hours and then implement flexible schedules for employees to minimize their commutes.

Whether you are looking to offer new types of insurance policies, develop connected healthcare solutions, prevent pipeline leaks or improve the energy efficiency of corporate office buildings, IoT technologies are likely to play a key role in your planning. Dell EMM can help you manage a rapidly expanding collection of nontraditional connected devices and sensors that comprise the IoT while controlling administrative complexity.

Dell TechCenter: Toad for Oracle: In-depth Q&A on SQL Execution Plans

How are you getting along with your SQL execution plan? Are you discovering why it’s called a plan?

In my last post I mentioned the Q&A that Bert Scalzo did after his webcast and that we’ve turned into an FAQ in Toad World.

I’m using this series of blog posts to highlight some of the FAQ, so in this post I’ll review the topic of . . .

SQL execution plans

Q: Is there a way to understand why indexes aren't being used, even if they are referenced in the WHERE?

A: Not really. For example, suppose WHERE GENDER = 'M' and I know there is an index on the column. Why did it not get used? If statistics have been gathered and Oracle knows that more than 20% of the values are male, then it will choose not to use the index. The explain plan does not show why things were skipped. Oracle shows only what it's going to use. That’s why it's called the "plan."
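The same selectivity effect shows up in any cost-based optimizer. Here is a small illustration using SQLite via Python’s built-in `sqlite3` module (not Oracle, and SQLite’s cost thresholds differ): once statistics show that half the rows match, the planner prefers a full table scan over the index, and the plan only tells you what it chose, not why.

```python
import sqlite3

# Illustration of selectivity-driven index skipping, using SQLite's
# cost-based planner (Oracle behaves analogously, with its own costs).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, gender TEXT)")
con.executemany(
    "INSERT INTO people VALUES (?, ?)",
    [(i, "M" if i % 2 else "F") for i in range(10_000)],
)
con.execute("CREATE INDEX idx_gender ON people (gender)")
con.execute("ANALYZE")  # gather statistics, akin to DBMS_STATS in Oracle

# Half the table matches, so the planner chooses a full scan:
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM people WHERE gender = 'M'"
).fetchall()
print(plan[0][3])  # e.g. "SCAN people" -- the index is skipped
```

The plan output simply reports the chosen access path; as Bert says, the "why" behind a skipped index has to be inferred from the statistics and selectivity yourself.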


Q: Is there an option to import an explain plan from one database to another one through Toad?

A: No, but it’s a good idea for a new feature. You can go to Toad World and Toad for Oracle Idea Pond to add the idea - the more people that vote on it, the more likely it will get implemented.


Q: Can we get recommendations from Toad to update the Oracle environment variable so that SQL runs fast? For example OPTIMIZER_INDEX_COST_ADJ?

A: Yes. Auto Optimize and SQL Optimizer rewrite the SQL using different hints and/or different coding styles (e.g., sub select vs. JOIN). Toad also offers the ability to help you easily add hints. When you're in Editor, go to Main menu -> View -> Code Snippets, then choose Show hints in the Code Snippets drop-down. You can then see all the possible Oracle hints, and drag and drop them into your code.


Q: I probably missed it, but how do I know if I'm looking at the proposed vs. actual explain plan?

A: Right-click the explain plan and choose Show Changed Plan. Read the blog post “Explain Plans Can Be Deceiving.”


Q: I have often seen Toad execute SQL differently from SQL*Developer. Also, at times the explain plan indicates that it uses the right index, but on the actual run, it does not use the index. Why not?

A: Both tools call DBMS_XPLAN, which means that the database – not these tools – provides the explain plan. So the only reason you might see different results is in the options being passed by those tools to the database. For example, if I check the Toad option to use the cached plan, that would tell the database to not look at the SQL text and do a plan, but to look in the SGA to find the actual plan. There are a dozen or more such parameters that can be passed to DBMS_XPLAN, so you need to know in both tools which options you have set that affect the parameters being passed.


Q: Are there any specific joins like HASH or MERGE that are red flags to avoid in the explain plan?

A: It depends. For example, a full table scan is bad, right? Well, if I'm on Exadata, it's the preferred option. Sorry to be so vague, but there are a lot of variables.


Q: Can I use Toad to force a SQL to use a certain plan?

A: Yes, SQL Optimizer (part of the Toad for Oracle Xpert Edition) can do this.


Q: Repeat your statement about cost. Are we looking for a balance of improved elapsed time, while maintaining a lower cost? How do you gauge the tradeoffs between improved time and more cost?

A: Cost means nothing. Many people think low cost = best plan. It isn’t so. Often a low-cost plan takes longer to run than a higher-cost plan. So NEVER make SQL optimization decisions based on explain plan cost alone. Read the blog post, “The Hitchhiker’s Guide to the EXPLAIN PLAN Part 14: Damn the Cost, Full Speed Ahead.”


Q: Can you go over the "optimized" query and explain the differences?

A: When you call in to Auto Tune and/or SQL Optimizer, we display the original and the alternatives side by side. We show the original query text vs. the rewrite and places where we may have added hints, for example. We also show side by side the original explain plan vs. the rewrite explain plan. So you can see at a glance all the major differences.


You can look through all of the new FAQ in Toad World and listen to the original webcast, “Pinpoint and Optimize Inefficient SQL with Toad™ for Oracle®.” Next time, I’ll post the FAQ about SQL optimization.

Gina Minks: My backups take too long

I hear this all the time – my backups are taking too long (or, alternatively, my backups are too slow). Most of the time, the backups take too long because of the massive amounts of data people back up. Why are we backing up so much data now? Well, it’s because we store more – and one of the reasons we can store so much more data is because storage is cheap.

read more here

Dell TechCenter: New Authentication Services security modules for Red Hat Enterprise Linux with SELinux fully enforced

Security has become an increasingly important consideration for organizations, and Authentication Services has always held security as one of its most important core functions. In keeping with this, we have been working on modules to ensure that Authentication Services will work on a Red Hat Enterprise Linux operating system with SELinux fully enforced. We have been testing and modifying these modules for some time now to make sure they work with as many configurations as possible; however, internal testing can only go so far.

Our goal is to ensure we have something that will be functional in as many environments as possible without additional configuration, while remaining secure. As such, we would like to solicit feedback from the Authentication Services community. A project has been started that includes access to the modules and instructions on how to implement them. The Authentication Services forums are available for feedback on anything you might discover or would like to comment on.

As Helen Keller once said, “Alone we can do so little; together we can do so much.” We invite you to work together with us to make this functionality as robust as possible. So join the conversation today.

For access to the project, please visit our GitHub page.

To discuss the project or to ask any questions, please visit the All Things Unix Forum.

** Please note: These modules are considered test modules and are therefore not yet fully supported. They are intended for test environments only. For assistance, please post your questions or concerns to the forum, where the product team will review and assist. **

Dell TechCenter: SharePlex version 8.6 and SharePlex for SQL Server are available today!

SharePlex, world renowned as a leading Oracle high availability solution, increases Dell’s expanding footprint of heterogeneous database support in version 8.6 with SharePlex for SQL Server. Version 8.6 enables IT staff to make copies of Oracle data in SQL Server databases to offload reporting, improve business intelligence, and implement an affordable archive and data warehouse. The new release brings the number of certified target platforms for SharePlex replication support to eight, including Oracle, Microsoft SQL Server, Hadoop, SAP Adaptive Server Enterprise (ASE), Open Database Connectivity (ODBC), Java Message Service (JMS), and SQL and XML files. This heterogeneous support allows customers to streamline operations while maximizing the value of their data to drive better business insights.

A number of other noteworthy enhancements in SharePlex v8.6 include:

  • Replication to ODBC enabled target systems
  • Replication to XML file
  • Support for Oracle 12c multitenant database allows replication to/from/among Oracle Pluggable Database (PDB) and Container Database (CDB)
  • Better process statistic details in the trace command summary and enhanced export details
  • Improved “Post” processing performance, with two new parameters to improve throughput when applying data to Oracle targets. The new options allow users to increase the level of concurrency and reduce the number of commits.

To try these new SharePlex features, download your free 30-day trial today.

Dell TechCenter: Email and Christmas Cookies

You might not think these two things have much in common, right? Well, consider this: you are the administrator of an email system and one of your users calls you with a problem. They sound worried, as if they recently lost an important piece of data...(read more)

Dell TechCenter: Dell PowerEdge Bests HP in SAP HANA Performance using the SAP BW Enhanced Mixed Load (EML) Benchmark

With the publication of two additional world record results, Dell's PowerEdge R730 and R920 servers are once again showing their industry leading performance when running SAP benchmarks. This time, the benchmark is SAP's Business Warehouse Enhanced Mixed Load, or BW-EML Benchmark, which stresses ad-hoc, near real-time querying and reporting on large data warehouses. The key performance indicator of the benchmark is the number of ad-hoc navigation steps per hour for a given number of initial records.

At the 2 billion initial records level, we see the R920 with 137,010 ad-hoc navigation steps per hour, edging out the next highest result of 126,980 ad-hoc steps achieved by the HP ProLiant DL580 Gen8 server. Both database servers have four processors from Intel's Xeon E7-4800 v2 family.

At the 1 billion initial records level, we see the R730 with 148,680 ad-hoc navigation steps per hour, edging out the next highest result of 129,930 ad-hoc steps, achieved by another HP server, the ProLiant DL580 G7. With just two processors from Intel's Xeon E5-2600 v3 family, the R730 is able to top the performance of the HP server with its four processors from Intel's Xeon E7-4800 family.

As mentioned before, the PowerEdge R730's industry leading performance running SAP benchmarks is nothing new. In September, Dell published a world record two socket SAP SD Two-Tier result of 16,500 benchmark users, surpassing the results published by HP, Cisco, and IBM.

Similarly, the PowerEdge R920 is also already a world record holder on SAP benchmarks, having the world record for four socket performance on the SAP SD Two-Tier benchmark with a result of 25,451 benchmark users.

The two SAP BW-EML results highlighted here, and Dell's world records on the SAP-SD benchmark, show Dell's commitment to world class performance in mission critical applications. More SAP benchmarks are underway, so stay tuned!

Details of these comparisons are summarized in the tables below.

For more information on the SAP SD and BW-EML benchmarks, visit

Dell TechCenter: Dell PowerEdge VRTX through the eyes of the customer

Ed. note: this post was authored by Charlie Rice, Information Systems Analyst, Trico Corporation

For more than 90 years, Trico Corporation has provided our customers with lubrication management solutions focusing on industrial equipment performance and reliability. We’ve done this by combining high-performance, globally recognized lubrication products with our proactive lubrication management training, in-plant services, and oil analysis services. When it came to updating our data center with new server technology, we looked to match our customer promise of performance and reliability while saving space, time, and money.

To replace our pre-existing, outdated servers, we looked into several solutions before settling on the Dell PowerEdge VRTX. We selected the two-node model with two PowerEdge M620 blades and 13 hard drives totaling almost 12 terabytes of storage. VRTX brings tremendous value to our business, and has significantly simplified our infrastructure.


We’ve removed armfuls of tangled cables and cords, and eliminated racks of servers and storage while still improving performance. The VRTX radiates less heat and is far quieter than other solutions. Basically, our solution is so cool, quiet, and simple, sometimes we forget it’s even there.

Read more about the success we’ve had with the solution in our case study

Dell TechCenter: Migrating to Office 365 is like driving a Lamborghini LP570 Superleggera

I recently had the pleasure of driving this beauty at an event in Las Vegas. I know, tough life... but it got me thinking how wonderful life would be if you could migrate content into Office 365 (specifically Exchange Online) as fast as...(read more)

Dell TechCenter: Register for Tuesday's 20-Minute Tech Demo: Conquering Your Top 4 #ActiveDirectory Management Challenges @DellSoftware

Managing the security and uptime of a Windows network requires you to master your Active Directory — the brain and heart of your network — with maximum efficiency. To effectively manage Active Directory, you have to overcome four key challenges...(read more)

Dell TechCenter: Dell’s Channel: Reflections on 2014 and Looking Ahead to 2015

Wow, what a year it has been! I started as Dell’s new Channel Chief in November 2013, taking on the amazing opportunity of expanding our channel efforts to help our partners succeed together with Dell.  We kicked off 2014 with brand new additions and changes to our PartnerDirect Program announced at Dell World last year. These changes included consolidating the channel and direct sales teams into a single organization to drive collaboration in the channel and offering a 20 percent compensation accelerator for any sale of PowerEdge VRTX, storage, networking, software, thin client, workstations and SecureWorks solutions to a new customer which was delivered through the channel.

Dell networking switches and thin clients

Today I am happy to report that in our most recent quarter, the channel now represents 40 percent of Dell’s global revenue, and Dell channel revenue is up double digits in 10 of our top 11 markets. Together with our partners, we are outgrowing the market by 3-4X and we have generated over 82,000 new deal registrations, which is a 9 percent increase over last year. We have also been applauded in the media for all of our strong work within the channel and have won 30 awards and Editor’s Choice selections over the past year. In a recent article, CRN reports, “A year after winning its $24.9 billion hard-fought battle to go private, Dell is on the attack and said it's set to take on an IT industry in upheaval, delivering ‘reliable and predictable’ end-to-end IT solutions in contrast to uncertainty at rivals including Hewlett-Packard and IBM.”

Another big development this past year was the integration of software solutions into the PartnerDirect program. We have experienced huge software growth and partner engagement, and in October had a sell-out crowd at our first annual Peak Performance Partner Conference for our software channel partners in Orlando, Florida. Media reports from CRN summed the event up as, “The Dell Security Peak Performance conference is the first time in three and a half years that Medeiros and the Dell SonicWall security team have held their own security conference for partners. It comes with Dell stepping up investment in its security business, which includes some 5,000 employees throughout the company devoted to security and Dell now managing 60 million security end points.”

Tiffany Bova and Cheryl Cook on stage at the Dell PartnerDirect Summit at Dell World 2014

As we look ahead to next year, we have a lot of exciting new programs and incentives to offer our channel partners and customers. As just announced at this year’s Dell World, we are committing $125 million in enhanced incentives and programs, including the Dell Storage Accelerator, Windows Server 2003 Migration, and Growth Accelerators for the Client Solutions and Enterprise Solutions Groups. We are also offering new Core Client Solutions and Workstation competencies to engage and reward our partners as well as new advanced competencies in Storage and Identity and Access Management to open up opportunities for partners who prefer to focus more deeply on these solutions in the coming months. Partners can maximize their technical and sales skills, as well as achieve Premier Partner status and benefits through these advanced competencies.

We also announced that we are investing tens of millions of dollars to enhance our IT systems and provide additional tools for lead management, deal registration, training and certification, navigation, and mobility and are accelerating our efforts to enhance the online and offline partner order experience.

We love feedback and have been fortunate to hear from partners giving us their firsthand accounts of how the program enhancements in the last year have helped their businesses grow:

“Our Dell business is up over 50% and is outpacing our total business growth. Our company is growing faster with Dell than with other partners. Why? Superior products and Dell’s end-to-end solutions story is really helping our business grow.” - Jason Cherveny, President and CEO - Sanity Solutions

“Because of Dell’s channel program, our business has grown over 50%. We’ve experienced 3X the server growth. 2X the Networking growth. 2X the Software growth. Storage is up 25%!” - Scott Winslow, President and Founder - Winslow Technology Group

For our partners, as we close out the year and this blog, please look for all of these new and improved solutions and programs, and much more to come in 2015. We appreciate your business and would love to hear from you! Please send us your thoughts and suggestions either through your Dell representative or in the comment section below. I also encourage you to engage with me on Twitter via @CookCherylS. We hope you all have a safe and happy holiday season, and I’m looking forward to continuing our partnership in the New Year!