
The Partly Cloudy Blog

Data Center, Storage, and Cloud Computing Expert | Madison, WI

Veeam to Deliver NetApp HCI Native Integration

October 23, 2018 By Adam Bergh

Note: This blog originally appeared on the Veeam.com Blog.

Four years ago, Veeam delivered to the market ground-breaking native snapshot integration into NetApp’s flagship ONTAP storage operating system. In addition to operational simplicity, improved efficiencies, reduced risk and increased ROI, the Veeam Hyper-Availability Platform and ONTAP continue to help customers of all sizes accelerate their Digital Transformation initiatives and compete more effectively in the digital economy.

Today I’m pleased to announce a native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire, is coming to Veeam Backup & Replication 9.5 with the upcoming Update 4.


Key milestones in the Veeam + NetApp Alliance

Veeam continues to deliver deeper integration across the NetApp Data Fabric portfolio to provide our joint customers with the ability to attain the highest levels of application performance, efficiency, agility and Hyper-Availability across hybrid cloud environments. Together with NetApp, we enable organizations to attain the best RPOs and RTOs for all applications and data through native snapshot based integrations.

How Veeam integration takes NetApp HCI to Hyper-Available

With Veeam Availability Suite 9.5 Update 3, we released a brand-new framework called the “Universal Storage API.” This set of APIs allows Veeam to accelerate the adoption of storage-based integrations that decrease impact on the production environment, significantly improve RPOs and deliver operational benefits that would not be attainable without Veeam.

Let’s talk about how the new Veeam integration with NetApp HCI and SolidFire delivers these benefits.

Backup from Element Storage Snapshots

The Veeam Backup from Storage Snapshot technology is designed to dramatically reduce the performance impact typically associated with traditional API driven VMware backup on primary hypervisor infrastructure.

This process dramatically improves backup performance, with the added benefit of reducing performance impact on production VMware infrastructure.

Granular application item recovery from Element Storage Snapshots

If you’re a veteran of enterprise storage systems and VMware, you undoubtedly know the pain of trying to recover individual Windows or Linux files, or application items, from a Storage Snapshot. The good news is that Veeam makes this process fast, easy and painless. With our new integration into Element snapshots, you can quickly recover application items directly from the Storage Snapshot, like:

  • Individual Windows or Linux guest files
  • Exchange items
  • MS SQL databases
  • Oracle databases
  • Microsoft Active Directory items
  • Microsoft SharePoint items

What’s great about this functionality is that it works with a Storage Snapshot created by Veeam and NetApp, and the only requirement is that VMs need to be in the VMDK format.

Hyper-Available VMs with Instant VM Recovery from Element Snapshots

Everyone knows that time is money and that every second a critical workload is offline, your business is losing money, prestige and possibly even customers. What if I told you that you could recover an entire virtual machine, no matter the size, in a very short timeframe? Sound farfetched? Instant VM Recovery technology from Veeam, which leverages Element Snapshots for NetApp HCI and SolidFire, makes this a reality.

Not only is this process extremely fast, there is also no performance loss afterward, because once recovered, the VM is running from your primary production storage system!


Veeam Instant VM Recovery on NetApp HCI

Element Snapshot orchestration for better RPO

It’s common to see a nightly or twice daily backup schedule in most organizations. The problem with this strategy is that it leaves your organization with a large data loss potential of 12-24 hours. We call the amount of acceptable data loss your “RPO” or recovery point objective. Getting your RPO as low as possible just makes good business sense. With Veeam and Element Snapshot management, we can supplement the off-array backup schedule with more frequent storage array-based snapshots. One common example would be taking hourly storage-based snapshots in between nightly off-array Veeam backups. When a restore event happens, you now have hourly snapshots, or a Veeam backup to choose from when executing the recovery operation.
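The arithmetic behind this strategy is simple: your worst-case data loss equals the largest gap between adjacent recovery points. A minimal sketch in plain Python (illustrative only, not Veeam or Element API code) shows how interleaving hourly snapshots with a nightly backup shrinks that worst case from 24 hours to one:

```python
from datetime import datetime, timedelta

def restore_points(day_start, backup_hour=0, snap_every_hours=1):
    """Enumerate one day's recovery points: array snapshots every
    snap_every_hours, plus one nightly off-array Veeam backup."""
    points = []
    for h in range(0, 24, snap_every_hours):
        kind = "veeam-backup" if h == backup_hour else "element-snapshot"
        points.append((day_start + timedelta(hours=h), kind))
    return points

def worst_case_rpo(points, period=timedelta(hours=24)):
    """Worst-case data loss is the largest gap between adjacent
    recovery points, treating the schedule as repeating daily."""
    times = sorted(t for t, _ in points)
    gaps = [b - a for a, b in zip(times, times[1:])]
    gaps.append(times[0] + period - times[-1])  # wrap into the next cycle
    return max(gaps)

day = datetime(2018, 10, 23)
hourly = restore_points(day)                        # nightly backup + hourly snapshots
nightly = restore_points(day, snap_every_hours=24)  # nightly backup only

print(worst_case_rpo(nightly))  # prints "1 day, 0:00:00"
print(worst_case_rpo(hourly))   # prints "1:00:00"
```

In production, the snapshot cadence itself would be driven by Veeam's snapshot orchestration against the Element array rather than a script like this.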

Put your Storage Snapshots to work with Veeam DataLabs

Wouldn’t it be great if there were more ways to leverage your investments in Storage Snapshots for additional business value? Enter Veeam DataLabs — the easy way to create copies of your production VMs in a virtual lab protected from the production network by a Veeam network proxy.

The big idea behind this technology is to provide your business with near real-time copies of your production VMs for operations like dev/test, data analytics, proactive DR testing for compliance, troubleshooting, sandbox testing, employee training, penetration testing and much more! Veeam makes the process of test lab rollouts and refreshes easy and automated.

NetApp + Veeam = Better Together

NetApp Storage Technology and Veeam Availability Suite are perfectly matched to create a Hyper-Available data center. Element storage integrations provide fast, efficient backup capabilities, while significantly lowering RPOs and RTOs for your organization.

Find out more on how you can simplify IT, reduce risk, enhance operational efficiencies and increase ROI through NetApp HCI and Veeam.

Filed Under: Data Center, NetApp

And Now For Something Not So Completely Different…

February 27, 2018 By Adam Bergh

Today I am happy to announce that I’ve taken a position with Veeam to be their new Global Alliance Architect for NetApp. In this new role I will be focusing on continuing Veeam and NetApp’s already deep technology integrations and expanding them into new product categories and new cloud integrations and capabilities. I’ll be serving as the subject matter expert in NetApp technologies and be the primary interface to the technical resources from both Veeam and NetApp.

For the better part of the last ten years I have been working exclusively in “the channel,” as we often refer to it. We often call ourselves consultants, value-added resellers, or systems engineers. We partner with the major vendors, OEMs, and manufacturers to engineer best-of-breed and often cutting-edge data center solutions to meet very specific customer requirements. Throughout the years of working with dozens of the industry’s most widely known and respected companies, we inevitably end up focusing in on certain technologies and vendors that just seem to be a cut above the rest. Whether it’s a technological or usability advantage, or an ease of doing business, a few companies always just stand out.

If you’re a regular reader of this blog, you know there’s been no secret about my love for enterprise storage solution vendor NetApp. NetApp’s portfolio of enterprise storage and cloud solutions is, in my opinion, the best in the industry, and from a pure technology standpoint, it is the most robust and complete solution set available from one company.

Being the data center junkie that I am, I know the key strategic value of data protection, application recoverability, and business continuity. There has always been one company that leads in the space and integrates into NetApp’s portfolio better than anyone else in the industry: That company is Veeam Software. Having sold and implemented Veeam’s software packages for many of my strategic clients, through personal experience I can absolutely vouch that Veeam truly is “Availability for the Always On Enterprise” and it truly does “Just Work”.

I can’t stress how excited I am to be “marrying” my love of two great tech companies and having a hand in bringing incredible new joint solutions to this market space.

A huge THANK YOU needs to go out to my previous home, Presidio Networked Solutions, without question the top next-generation digital solution provider in the VAR space. I had the privilege of working with some of the most talented solution architects and technical sales people in the industry and encourage any of my readers to check them out if you’re looking for your next career move.

It’s onward to new challenges for me and I can’t wait to make a new home with the Veeam Dreeam Teeam!

Filed Under: Data Center, NetApp, Veeam

Need for Speed? NetApp Launches the EF570

September 22, 2017 By Adam Bergh

Two questions for you…

  1. Do you like speed?
  2. Do you want to pay a lot for it?

 

If your answers are 1. Hell Yes! and 2. Hell No! – today is your day.

Here comes NetApp’s newest All-Flash dragster – The EF570

Before we get into the specs on this new kit, let’s review a little bit of history.

NetApp’s E and EF Series storage systems run an OS called ‘SANtricity’. SANtricity has shipped with over a million systems over 20 years, and is the #1 ‘SAN only’ OS deployed in the world. In short, this is no newcomer to the industry, and it is a rock-solid enterprise-class hardware platform.

SANtricity differs from NetApp’s flagship OS ‘ONTAP’ in that its streamlined architecture is optimized for:

  1. low-latency workloads
  2. big data analytics
  3. bare metal applications
  4. price/performance considerations
  5. highest bandwidth in a very dense form factor.

With that being said, let’s get into the newest in the lineup:


The EF570

The EF570 is the successor to the immensely popular EF560.

This newest system is rated for 1 million 4K IOPS at 0.3 ms of latency. That’s 300 MICROseconds of latency, at ONE MILLION IOPS.

Oh, and how does 21 GBps of read throughput and a total max capacity of 1.8PB work for you?

These numbers are up from about 850,000 IOPS at 800 microseconds of latency and 12 GBps of read throughput on the previous-gen EF560. Not a bad bump.
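For context, here is the back-of-the-envelope comparison of the quoted figures — just arithmetic on the numbers above, not additional vendor benchmarks:

```python
# Quoted generational figures: EF560 vs. EF570
ef560 = {"iops": 850_000, "latency_us": 800, "read_gbps": 12}
ef570 = {"iops": 1_000_000, "latency_us": 300, "read_gbps": 21}

iops_gain = ef570["iops"] / ef560["iops"] - 1                # ~17.6% more IOPS
latency_cut = 1 - ef570["latency_us"] / ef560["latency_us"]  # 62.5% lower latency
bw_gain = ef570["read_gbps"] / ef560["read_gbps"] - 1        # 75% more read throughput

print(f"{iops_gain:.1%} more IOPS, {latency_cut:.1%} less latency, {bw_gain:.0%} more bandwidth")
```

The latency cut is the headline here: more than half again the IOPS budget while dropping response time by nearly two-thirds.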

I know what you’re thinking, though: these are just marketing numbers. How about an independent benchmark?

Take a look at NetApp’s SPC benchmarks below:

SPC-1 Benchmark Results

SPC-2 Benchmark Results

Guess who now holds the #1 spot all time in Price/Performance ratio in both SPC-1 and SPC-2 benchmarks?

Spoiler alert: It’s the EF570!

SPC-1 Results:

These show an incredible 500K SPC-1 IOPS with an overall response time of 0.26 ms!


SPC-2 Results:

The SPC-2 test is focused on throughput. Here you can see an incredible 21GBps throughput on a database query test!


More On Performance

The pace at which NetApp keeps ramping up the performance on the EF platform is pretty staggering. Check out this comparison graph on the history of the EF lineup on OLTP workloads:


What else is new?

An all-new HTML5 management interface that’s easier than ever with SANtricity 11.4.

New host interfaces: 100Gb NVMe over InfiniBand, 32Gb FC, 25Gb iSCSI, 12Gb SAS, 100Gb IB

Yes, that’s right: NetApp now has NVMe front-end interfaces. More on this in a future blog post!

When Can I Get It?!?!

The new EF570 starts to ship this October and is available for order today.

Filed Under: Data Center, NetApp, Storage Tagged With: E-Series, EF, NetApp

Storage Field Day 12 – Day 3 Recap

March 21, 2017 By Adam Bergh

Day 3 is a wrap and I’m sad that SFD12 has finally come to an end. Quite the experience, and day 3 didn’t disappoint. We arrived at Intel’s campus to meet with SNIA and Intel for our last day.

SNIA

If you don’t know who SNIA is or you’ve never heard of them, you’re probably not that deep in the industry. But for the uninitiated, SNIA stands for “The Storage Networking Industry Association” and is a “non-profit organization made up of member companies spanning information technology. A globally recognized and trusted authority, SNIA’s mission is to lead the storage industry in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement and security of information.”

Michael Oros, Executive Director at SNIA, gives an overview of the entire organization, including their technical focus for the foreseeable future. They are the leading body advancing industry storage standards, best practices, and testing.

Introduction to SNIA with Michael Oros

SNIA Hyperscaler Storage Development with Mark Carlson

Mark Carlson, Principal Engineer at SNIA, reviews the state of hyperscaler storage, and how the organization can better address this disruptive trend. These hyperscale organizations are building their own storage arrays, rather than going to traditional storage vendors. By some measure, 50% of all bits shipped go to hyperscalers.

Intel

Last but not least, Intel was on hand to drop some serious technical details on the SFD12 delegates.

Jonathan Stern, Applications Engineer, Network Platforms Group, gives an overview of Intel’s Storage Performance Development Kit. He reviews the performance benefits of SPDK over using the standard Linux kernel. This is especially evident when it comes to VM optimization and hyperscale operations.

SPDK and the Future of Storage with Jonathan Stern

Tony Luck, Principal Engineer, SSG Enabling Group, reviews Intel’s Resource Director Technology, which can help with processor core resource management for software defined networking. He goes into a technical deep dive of what’s actually happening on the silicon to reach this core optimization, down to the L3 cache.

Intel Resource Director Technology (RDT) for Storage with Tony Luck

THAT’S A WRAP!

Thanks so much to Stephen Foskett, Kat Kitzmiller, Rich Stroffolino, and Megan Robinette for making SFD12 possible! I hope to be invited back again!


Filed Under: Data Center, Storage

Storage Field Day 12 – Day 2 Recap

March 21, 2017 By Adam Bergh

Day 2 of Storage Field Day was absolutely massive. We brought out the big names today as we visited the motherships of Nimble Storage, NetApp, and Datera!

Nimble Storage

We had the privilege of dropping by the Nimble Storage headquarters in Silicon Valley to be greeted by a giant banner welcoming us to #SFD12! That was nice of Nimble.


We arrived at Nimble at an interesting time. We had just learned about HP Enterprise’s intent to acquire Nimble Storage. The topic came up over breakfast of course, with much of the information being too new to share publicly. I can’t share much of what we heard, but let’s just say HP is really excited about having Nimble in the fold and has big plans for their tech.

Nimble Arrives In the Cloud

The HP news notwithstanding, Nimble had a big tech announcement to make: Nimble Cloud Volumes!


At the 10,000-foot level, the new Nimble cloud service is Nimble storage technology in a shared environment with a direct 1 ms latency connection into AWS and Azure, with a brand-new web-based provisioning interface painted on top of it all.

Nimble believes standard hyperscaler storage is not redundant and resilient enough. In steps NCV to provide six nines of data protection and the rich data services you’ve come to know and love with Nimble. And they may be on to something, as AWS just had a major outage.
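To put “six nines” in perspective, here is the downtime each availability level implies over a year — simple arithmetic, not a Nimble-published figure:

```python
def downtime_per_year(availability: float) -> float:
    """Seconds of allowable downtime per (365-day) year at a given availability."""
    return (1 - availability) * 365 * 24 * 3600

print(f"99.9%    -> {downtime_per_year(0.999) / 3600:.2f} hours/year")
print(f"99.999%  -> {downtime_per_year(0.99999):.0f} seconds/year")
print(f"99.9999% -> {downtime_per_year(0.999999):.1f} seconds/year")
```

Six nines works out to roughly half a minute of downtime per year, versus nearly nine hours at three nines.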


Nimble Storage Cloud Volumes Overview with Gavin Cohen

(https://www.youtube.com/watch?v=RcS-CDaurZY)

Nimble Storage Cloud Volumes Demo with Sandeep Karmarkar

(https://www.youtube.com/watch?v=DnZ4w7ceegs)

InfoSight is Still King.

Rod Bagg, Vice President, Analytics & Customer Support at Nimble goes deep on the biggest driver of Nimble sales – InfoSight. If you’re not aware of Nimble’s InfoSight analytics platform you should be.


Nimble Storage InfoSight Predictive Analytics Overview with Rod Bagg

Nimble Storage InfoSight Demo with David Adamson

Nimble Storage Achieving Six-Nines Availability with Rod Bagg

Nimble Storage Docker Volume Plugin with Sakthi Chandra and Michael Mattsson

NetApp

As a member of the NetApp A-Team I always love to get to Sunnyvale to meet with the NetApp team to hear straight from the horse’s mouth what the big “N” has been cooking up. This visit didn’t disappoint as NetApp pulled out a few things that I didn’t even know they were working on.

After this visit one thing became abundantly clear: NetApp is “all-in” on its Data Fabric vision for the hybrid cloud.

Arthur Lent, VP, Chief Architect, reviews updates to NetApp’s Data Fabric. He puts the updates into the context of the problems of the modern enterprise, primarily the issue of data silos. Arthur reviews how over the last two years, the capabilities and versatility of Data Fabric has exploded.

What’s New with NetApp Data Fabric with Arthur Lent

Duncan Moore, Director, StorageGRID software, reviews the recent trend in object storage. He then uses this to pivot into a discussion of the ongoing gaps within object storage, particularly with unstructured data. He then reviews how StorageGRID can serve to extend NetApp’s Data Fabric through S3. Finally, Duncan reviews how StorageGRID scales to Webscale storage needs.

NetApp Object Storage and StorageGRID with Duncan Moore

And the highlight of the visit to NetApp: Dave Hitz, Executive VP and Founder of NetApp, further discusses the history of Data Fabric and their overall cloud vision. He also reviews how the state of the company has changed over the last two years, including the massive growth in their all-flash storage arrays.

Top of Mind Discussion with NetApp Founder Dave Hitz

Datera

Last but not least on day 2, we visited startup Datera, which promises high-performance elastic block storage with cloud-like agility on-prem to deliver operational simplicity.

First up, we got a meeting with Marc Fleischmann, CEO and Founder of Datera. He overviews their Elastic Data Storage, which is targeted toward on-premises clouds. Their overall mission is to bring data simplicity, agility and performance to on-prem clouds, and to allow for better data management across a hybrid cloud.

Datera Update with CEO and Founder, Marc Fleischmann

Ashok Rajagopalan, Head of Products, introduces their Elastic Data Fabric, which is designed to defragment the data center and improve utilization. This solution is flexible enough for any application, allows for mixed nodes within an on-premises cloud, provides for true scale-out, and allows for any orchestration stack.

Datera Elastic Data Fabric with Ashok Rajagopalan

Nic Bellinger, CTO and Co-Founder, goes to the whiteboard to review the distributed placement implementation within Datera’s solution. He reviews how Datera designed their solution around application requirements at a very high level. The entire system is built around constant change, allowing you to easily modify your placement map without disrupting IO.

How We Built Datera with Nic Bellinger, CTO and Co-Founder

Bill Borsari, Head of Systems Engineering, gives a preliminary demonstration of the Datera user interface. The system hides the carefully constructed architecture behind an easy-to-consume UI. He walks through the various views available to administrators, including active nodes and overall system performance. Finally, he digs into how the system handles application performance on a policy level.

Datera Elastic Data Fabric Demo with Bill Borsari

Bill Borsari, Head of Systems Engineering, demonstrates using Datera with OpenStack. He shows volumes created in OpenStack being reflected in the Datera GUI, which also allows for viewing tenancy as well as volumes.

Datera Ecosystem Demo: OpenStack with Bill Borsari

Bill Borsari, Head of Systems Engineering, demonstrates Datera working with Kubernetes. With it, you can set a block storage volume for the container instances to use, which can be added to if additional workloads are created by Kubernetes.

Datera Ecosystem Demo: Kubernetes with Bill Borsari


Filed Under: Data Center, NetApp, Storage


Connect with me:

  • LinkedIn
  • Twitter

A Little About Me…

Adam Bergh is a storage and virtualization expert and cloud computing junkie. You can follow him on Twitter and via this blog for insights and opinions on the latest SAN, virtual data center and cloud technology.


Areas of Expertise:

Data Centers, VMware vSphere, NetApp SAN and NAS, Cisco UCS, Cisco Nexus, FlexPod, Disaster Recovery and Business Continuity Planning

Certifications:

VMware VCP4/VCP5, VTSP, NetApp NCIE, NCDA, Cisco UCS, CCNA, MCSE, MCSE+Security, MCSA, MCSA+Security, MCP, CompTIA Security+, Compellent SAN

Copyright © 2023 · Adam Bergh - Data Center and Cloud Computing Expert
