Storage Unpacked Podcast

Summary: Storage Unpacked is a technology podcast that focuses on the enterprise storage market. Chris Evans, Martin Glassborow and guests discuss technology issues with vendors and industry experts.

  • Artist: Storage Unpacked Podcast
  • Copyright: Copyright © 2016-2021 Brookend Ltd

Podcasts:

 #119 – Storage Hardware is Back! | File Type: audio/mpeg | Duration: 30:06

This week, Chris and Martin catch up with some of the storage hardware on show at Flash Memory Summit 2019. FMS is essentially a trade show, so there are lots of technologies on display that haven’t quite made it to the enterprise. Technology vendors are selling to other vendors, and this year there were some interesting hardware solutions to see. Flash has been on the agenda for the last decade and continues to push the boundaries of capacity and performance. Chris and Martin discuss 96+ layer products and the promise of 500 layers in the future. PLC or penta-level cell NAND may be on the horizon, but how would it be used? (A quick bits-per-cell illustration follows these show notes.) Could Optane (3D-XPoint) get cheaper and kill off new solutions like Z-NAND and XL-FLASH? The discussion moves on to look at computational storage, a subject we recently discussed with NGD Systems. There is still work to be done to make these technologies easily adoptable, such as trusted code and device management. Storage acceleration looks to improve the performance of traditional databases, with Xilinx and Pliops showing solutions. Finally, Martin and Chris discuss Burlywood and the optimisation of SSDs. Storage hardware is definitely back!

Vendors covered in this podcast:

  • Toshiba (XL-FLASH)
  • Samsung (Z-NAND)
  • NGD Systems (Computational Storage)
  • Xilinx & Pliops (FPGA acceleration)
  • Burlywood (SSD optimisation)
  • Liqid (Composable infrastructure)

Elapsed Time: 00:30:06

Timeline

  • 00:00:00 – Intros – Friday 13th!
  • 00:00:44 – Flash Memory Summit 2019
  • 00:01:30 – Where is NAND Flash headed? 96 Layers!
  • 00:03:00 – The mythical 100TB drive
  • 00:03:56 – Is PLC (penta-level cell) a reality?
  • 00:06:00 – Intelligent flash drives could be on the way
  • 00:06:40 – Zee-NAND or Zed-NAND?
  • 00:07:54 – Could price declines in Optane kill off Z-NAND & XL-FLASH?
  • 00:09:15 – Compute and storage getting closer – computational storage
  • 00:10:40 – Compute on storage has been done before…
  • 00:12:14 – Who will write code for computational storage devices?
  • 00:15:10 – How will the codebase be managed across thousands of drives?
  • 00:16:20 – Storage accelerator cards using FPGAs (Xilinx & Pliops)
  • 00:19:00 – Composable infrastructure (e.g. Liqid) with accelerator cards
  • 00:20:50 – Are future storage startup exits likely to be via hyper-scalers?
  • 00:22:23 – Burlywood – SSD optimisation
  • 00:24:30 – Burlywood has so many acquisition choices…
  • 00:26:00 – Public cloud will be a rich seam of future storage podcasts
  • 00:28:04 – So are we seeing a renaissance in storage hardware?
  • 00:29:21 – Wrap Up

Related Podcasts & Blogs

  • #117 – Introduction to Computational Storage with NGD Systems
  • #113 – The Expanding Storage Hierarchy with Erik Kaulberg
  • The Expanding Storage Hierarchy
  • What is Software Composable Infrastructure?

Copyright (c) 2016-2019 Storage Unpacked.
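Not from the episode itself, but a quick aside on the cell-type arithmetic: every extra bit per cell doubles the number of charge states a NAND cell must reliably distinguish, which is why the step from QLC to PLC is so hard. A minimal sketch:

```python
# Back-of-the-envelope arithmetic behind the SLC/MLC/TLC/QLC/PLC naming:
# n bits per cell requires 2**n distinguishable voltage states, which is
# why PLC is much harder to build reliably than QLC.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

for name, bits in CELL_TYPES.items():
    states = 2 ** bits  # distinct charge levels the cell must hold
    print(f"{name}: {bits} bit(s)/cell -> {states} voltage states")
```

PLC's 32 states must fit into roughly the same voltage window as SLC's two, which is why endurance and retention get harder with every step.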

 #118 – Pure Accelerate 2019 with Patrick Smith | File Type: audio/mpeg | Duration: 30:07

This week, Chris is in Austin, Texas at Pure Accelerate 2019. Ahead of the event, Chris catches up with Patrick Smith, Field CTO for EMEA at Pure. Patrick moved to Pure after being a customer. The initial discussion covers why flash as a platform offers better reliability and consistency than traditional disk-based storage. At Accelerate 2019, Pure announced FlashArray//C, DirectMemory modules and the GA of Cloud Block Store. Patrick previews these announcements and the benefits customers can expect from the new products.

Elapsed Time: 00:30:07

Timeline

  • 00:00:00 – Intros
  • 00:03:00 – What are customer experiences in moving to Flash?
  • 00:06:30 – Flash is about consistency as much as performance
  • 00:08:50 – Dedupe & compression helped justify flash costs
  • 00:09:55 – Flash removed significant portions of design and deploy headaches
  • 00:12:00 – The swing from generalists to specialists and generalists again
  • 00:13:34 – Flash also improves availability
  • 00:15:00 – Businesses wanted to be like hyperscalers
  • 00:17:00 – AWS has offloaded storage functionality to hardware
  • 00:18:30 – Announcements – FlashArray//C
  • 00:21:55 – DirectMemory Flash for read performance acceleration
  • 00:24:00 – CloudSnap extends to Microsoft Azure
  • 00:24:50 – Cloud Block Store is now GA
  • 00:29:30 – Wrap Up

Related Podcasts & Blogs

  • #51 – Pure Accelerate Pregame
  • FlashBlade 2.0 with Rob Lee
  • Pure Storage Microsite

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #8e6d.

 #117 – Introduction to Computational Storage with NGD Systems | File Type: audio/mpeg | Duration: 29:22

This week’s episode is the final recording from Flash Memory Summit 2019. Chris is joined by the team from NGD Systems for a discussion on computational storage. On this podcast are Scott Shadley, VP of Marketing, Nader Salessi, CEO, and Ashok Savdharia, who has the lovely title of “technologist”. Computational storage uses the processing power of modern SSD controllers to run application workloads directly on the SSD itself. Imagine each drive being a mini blade server, with compute, memory and storage. In the case of NGD Systems, this means a range of devices and form factors in a product family called Newport. Aside from the fascinating ability to run applications directly on an SSD, what could be really attractive about in-situ processing with computational storage is the ability to use the local compute capability with little or no power overhead. The team explains how this is achieved and describes some of the use cases that make computational storage devices an interesting area to watch, especially for edge computing. (A toy model of the data-movement benefit follows these show notes.) In the recording, Scott references the SNIA working group, which can be found online here – https://www.snia.org/computational.

Elapsed Time: 00:29:22

Timeline

  • 00:00:00 – Intros
  • 00:01:30 – What is Computational Storage?
  • 00:02:40 – How real are the products & solutions today?
  • 00:03:55 – Form factors – what is available?
  • 00:07:30 – Running Containers & Linux for applications
  • 00:10:00 – Products should be as cheap as standard drives
  • 00:12:00 – ASIC rather than FPGA is a cheaper solution at scale
  • 00:13:30 – What applications run on a computational device?
  • 00:15:43 – Cluster file system – share the data with the host
  • 00:17:30 – Drives can run applications like MongoDB and TensorFlow
  • 00:20:00 – What manages workload processing across multiple devices?
  • 00:22:25 – Computational devices will have strong edge use cases
  • 00:26:00 – Customer creativity will generate new and interesting solutions
  • 00:28:00 – Where are the gaps? What needs to be done?
  • 00:29:20 – Wrap Up

Related Podcasts & Blogs

  • #57 – Storage on the Edge with Scott Shadley
  • #37 – State of the Storage Union with Chris Mellor
  • The Practicality of In-Situ Processing

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #d260.
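The appeal of in-situ processing is easiest to see in terms of data movement. Below is a toy model of our own (not NGD Systems' API or the Newport hardware): each "drive" runs the query against its own records, so only matches cross the host interface.

```python
# Illustrative toy model of in-situ processing (not NGD Systems' API):
# each "drive" runs the query against its own data, so the host receives
# results rather than raw records.
from dataclasses import dataclass, field

@dataclass
class ComputationalDrive:
    records: list = field(default_factory=list)

    def run_query(self, predicate):
        # Compute happens "on the drive"; only matches cross the bus.
        return [r for r in self.records if predicate(r)]

drives = [ComputationalDrive(list(range(i, 100_000, 4))) for i in range(4)]
matches = [r for d in drives for r in d.run_query(lambda r: r % 997 == 0)]
scanned = sum(len(d.records) for d in drives)
print(f"records scanned on-device: {scanned}, records returned to host: {len(matches)}")
```

The host sees a few hundred results instead of 100,000 records, which is the bandwidth and power argument made in the episode.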

 #116 – Fixing Gaps in Cloud Storage with Andy Watson | File Type: audio/mpeg | Duration: 22:54

This week, Chris chats with Andy Watson, CTO at WekaIO, in another episode recorded live at Flash Memory Summit 2019. With the recent acquisitions of E8 Storage and Elastifile by hyper-scale cloud companies, is this move an attempt to plug gaps in existing cloud storage offerings? Andy provides his opinion on the two acquisitions and how they might fit into the existing ecosystem. Even 1GB/s may not be enough throughput as we see more focus on ML/AI applications using cloud-based GPUs and TPUs (a quick back-of-the-envelope check follows these show notes). Some applications expect data in both NFS and SMB format, which could be a challenge for some solutions. These purchases could be opportunistic, and we may need to keep watching to see how cloud providers move to the next level of storage performance. You can find out more about WekaIO on the company’s website and follow Andy on Twitter at https://twitter.com/the_andywatson.

Elapsed Time: 00:22:54

Timeline

  • 00:00:00 – Intros
  • 00:01:00 – Quick overview of WekaIO
  • 00:05:00 – E8 Storage & Elastifile acquisitions by public cloud providers
  • 00:07:10 – Object storage isn’t good for performance, block is not scalable
  • 00:08:20 – AWS SMB support was implemented via a separate feature
  • 00:10:39 – Some applications are inherently multi-protocol
  • 00:12:20 – 1GB/s isn’t enough performance to keep a GPU busy
  • 00:15:30 – Could latest purchases be opportunistic?
  • 00:18:00 – The public cloud will have greater file demands in the future
  • 00:21:55 – Wrap Up

Related Podcasts & Blogs

  • #52 – An Introduction to WekaIO Matrix with Liran Zvibel
  • Can the WekaIO Matrix file system be faster than DAS?
  • WekaIO Presents at Storage Field Day 18
  • #92 – Introduction to Elastifile with Jerome McFarland
  • Disaggregated Storage Part II with Zivan Ori from E8 Storage

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #8818.
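A rough sanity check on that throughput point, using illustrative figures of our own rather than numbers from the episode:

```python
# Back-of-the-envelope check on why 1GB/s of storage throughput falls
# short for ML training. All figures here are illustrative assumptions.
gpus = 8                 # GPUs in a single training node (assumed)
gb_per_gpu_per_s = 1.5   # data each GPU can ingest per second (assumed)

required = gpus * gb_per_gpu_per_s
print(f"aggregate ingest needed: {required:.1f} GB/s")
print(f"shortfall with 1 GB/s of storage throughput: {required - 1:.1f} GB/s")
```

Even with modest per-GPU assumptions, a single training node outruns 1GB/s by an order of magnitude, which is the gap Andy argues the cloud providers are buying their way out of.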

 #115 – The Transition to NVMe-oF Storage Solutions with Praveen Asthana | File Type: audio/mpeg | Duration: 24:17

This week, Chris catches up with Praveen Asthana from Exten Technologies in an episode recorded at Flash Memory Summit 2019. With NVMe and NVMe over Fabrics set to become the dominant technologies for public cloud and the enterprise, exactly how will the transition from current solutions occur? Praveen discusses the continuing need for centralised storage that can fully exploit the benefits of NVMe. This will need another transition in technologies, in the same way that flash created a whole new market of storage solutions some ten years ago. An interesting aspect of NVMe adoption is the way hyperscalers will use the technology. The need for NVMe in public cloud was demonstrated by the acquisition of E8 Storage by AWS earlier this year. Hyperscalers will use NVMe not just for speed but to optimise the use of hardware and reduce costs. Exten Technologies is developing solutions in software that allow the efficient use of hardware from a range of vendors. The architecture is designed to exploit micro-services with efficient processor utilisation. You can learn more about how Exten Technologies is approaching the NVMe market at https://exten.io.

Elapsed Time: 00:24:17

Timeline

  • 00:00:00 – Intros
  • 00:01:55 – Is the adoption of NVMe reflective of SAN 1.0?
  • 00:03:30 – Whatever happened to application-level resiliency?
  • 00:05:30 – The move to flash – retrofit and new solutions
  • 00:07:00 – NVMe solutions aren’t all end-to-end – some use SAS
  • 00:09:30 – Will NVMe force a build from scratch for vendors?
  • 00:11:00 – Will all applications need the performance NVMe offers?
  • 00:12:30 – Use cases – HPC, analytics
  • 00:14:00 – How will hyperscalers use NVMe solutions?
  • 00:16:35 – What does enterprise adoption look like?
  • 00:18:00 – EXTEN is developing a solution via software
  • 00:20:35 – Enterprises like appliances
  • 00:20:55 – Will NVMe drive new (unexpected) solutions?
  • 00:23:30 – Wrap Up

Related Podcasts & Blogs

  • What is Software Composable Infrastructure?
  • Performance Analysis of SAS/SATA and NVMe SSDs
  • #89 – Choices in NVMe Architectures

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #C7A0.

 #114 – CIO Storage & Data Challenges with Clay Ryder | File Type: audio/mpeg | Duration: 37:13

This week, Chris catches up with Clay Ryder from the DCS (Data Centre Systems) group at Western Digital. The discussion focuses on the challenges for today’s CIO, with an emphasis on storage and data.

The division of responsibility between CTO and CIO isn’t always obvious. As we learn in this discussion, there’s a lot of overlap and a lot of interaction between the two. CIOs are focused on data and more specifically information, whereas the CTO has to deliver the right platform to meet application needs. The conversation covers challenges around technical debt, determining what data to retain and what to discard, as well as how the public cloud should be used. Clay describes the idea of a “golden copy” of data, sitting on-premises, with cloud rented for compute. Finally, the discussion closes with a view on whether the modern CIO has more or less to worry about than 20 years ago. Is it just more of the same?

You can find out more information about Western Digital online at https://www.westerndigital.com/ and in particular the blogs Clay references at https://blog.westerndigital.com/.

Elapsed Time: 00:37:13

Timeline

  • 00:00:00 – Intros
  • 00:01:10 – How do CIO & CTO responsibilities differ?
  • 00:01:54 – Information or Data – what’s the difference?
  • 00:03:00 – What are CIO top concerns – security
  • 00:04:30 – Technical debt, worse in IT than other industries
  • 00:10:13 – Re-using data for future value, what are the challenges?
  • 00:14:00 – Automation will be needed to manage data sources
  • 00:15:30 – Is public cloud the solution for everything IT?
  • 00:20:00 – The disadvantage of opex-based charging
  • 00:24:00 – Creating a golden copy on-premises
  • 00:25:46 – Is the CIA listening?
  • 00:27:02 – Should CIOs care about technologies like NVMe?
  • 00:32:00 – Unknown unknowns, understanding what’s possible
  • 00:34:00 – Are modern CIO challenges the same as they ever were?
  • 00:36:03 – Wrap Up

Related Podcasts & Blogs

  • #62 – The Future of Data Infrastructure with Scott Hamilton
  • Western Digital Redefines DRAM Caching
  • What is Software Composable Infrastructure?

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #BB2D.

 #113 – The Expanding Storage Hierarchy with Erik Kaulberg | File Type: audio/mpeg | Duration: 25:55

This week’s episode is the first in a series from Flash Memory Summit 2019. Chris catches up with Erik Kaulberg from INFINIDAT to discuss how the expanding hierarchy of solid-state storage media will drive new products and solutions in the future. The topics of conversation cover how media has diversified, with new solutions like MRAM and 3D-XPoint. At the same time, we hear that 5-level NAND could be a reality. How are vendors combining these products with traditional storage? Will the ability to learn from the field, through deployments of traditional storage designs, provide an advantage over SDS? Will vendors selling both hardware and software solutions have an advantage over those only playing in one market? These are just some of this week’s topics of conversation.

Elapsed Time: 00:25:55

Timeline

  • 00:00:00 – Intros
  • 00:01:50 – What changes are we seeing in the media industry?
  • 00:04:00 – How does solid state media expansion parallel HDDs?
  • 00:07:00 – New flash solutions will not necessarily improve performance
  • 00:08:00 – It’s not tiering, it’s new techniques to rebalance active data
  • 00:10:00 – Storage needs to get simpler, not more complex over time
  • 00:13:00 – How will benchmarks work in the future, are they done?
  • 00:14:20 – How is INFINIDAT using a mix of media?
  • 00:16:00 – VAST Data – good example of new media in new systems
  • 00:17:00 – Will storage systems kill off SDS?
  • 00:21:30 – Where will the solid-state market go? More products?
  • 00:23:30 – Will we see a new generation of hardware platforms?
  • 00:25:00 – Wrap Up

Related Podcasts & Blogs

  • #105 – Introduction to VAST Data (Part I)
  • #89 – Choices in NVMe Architectures
  • #82 – Storage Predictions for 2019
  • 2019 is the year of NVMe
  • Will TCO Drive Software Defined Storage?

Erik’s Bio

Erik Kaulberg is a Vice President at Infinidat (https://infinidat.com), leading cloud strategy including the Neutrix Cloud storage service and FLX private cloud business model, key alliance partnerships like VMware and Cisco, and analyst relations. He has broad expertise in enterprise storage and frequently engages key customers, partners, and analysts. Erik previously ran worldwide enterprise storage strategy and business development for IBM, after he sold all-flash array innovator Texas Memory Systems to the company.

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #6C8C.

 #112 – Introduction to NetApp Data Availability Services (NDAS) | File Type: audio/mpeg | Duration: 31:51

NetApp has developed a new backup service called NDAS, or NetApp Data Availability Services. NDAS is based in the public cloud and provides the ability to run analytics against secondary data in AWS S3, without having to re-hydrate it through a backup platform. Chris met with Charlotte Brooks (Technical Marketing Engineer) and Joel Kaufman (Director of Technical Marketing) to discuss how NDAS works and what customers are doing with their cloud-based data.

NDAS provides some interesting features that make the product slightly different from existing backup software. The solution runs in AWS using the customer’s account. This ensures data is secure, but also provides the ability to access data directly and run other services (like analytics) against the backup data. NDAS uses a feature called libC2C to directly access data in S3 as if it were a file system. This self-describing capability means data can be accessed long after the backup software has been shut down, or even decommissioned. (A small sketch of direct S3 access follows these show notes.)

You can read more on NDAS in this blog post, which also links to some great Tech Field Day presentations explaining NDAS in more detail.

Elapsed Time: 00:31:51

Timeline

  • 00:00:00 – Intros
  • 00:01:00 – What happened to Joel’s beard?
  • 00:02:00 – NetApp Data Availability Services – NDAS
  • 00:05:00 – NDAS is an enabler for getting data into the public cloud
  • 00:06:30 – Full SaaS or run in the customer’s own AWS account?
  • 00:08:05 – NDAS is deployed through the AWS Marketplace
  • 00:10:28 – Today just AWS, possibly other clouds and on-prem in the future
  • 00:12:20 – What do we mean by self-describing data?
  • 00:16:00 – Scanning file systems isn’t efficient – how is metadata indexed?
  • 00:17:44 – What else can customers do with this secondary data?
  • 00:20:00 – IT relevance – building an internal catalogue of services
  • 00:23:00 – How are customers using NDAS data?
  • 00:29:39 – Any teasers for futures?
  • 00:31:16 – Wrap Up

Related Blogs & Podcasts

  • Exploiting Secondary Data with NDAS from NetApp
  • #91 – Storage Field Day 18 in Review

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #4E81.
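Because the backup copies live as ordinary objects in the customer's own S3 account, any standard tool can enumerate them. A minimal sketch in plain boto3 (the bucket and prefix names are hypothetical, and this does not reproduce libC2C itself):

```python
# Minimal sketch: enumerating backup copies directly in the customer's S3
# account with standard boto3. Bucket/prefix names are hypothetical; NDAS
# itself uses libC2C, which this does not reproduce.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="example-ndas-backups", Prefix="copies/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```

This is the practical meaning of "self-describing": the objects remain reachable with generic tooling even after the backup software is gone.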

 #111 – The Cohesity Marketplace with Rawlinson Rivera | File Type: audio/mpeg | Duration: 31:11

This week Chris is in Silicon Valley and catches up with Rawlinson Rivera, Field CTO at Cohesity. The company recently released a new feature called Marketplace that enables customers to run data-focused applications directly on the Cohesity platform.

The idea of running applications on data protection hardware has some benefits and potential disadvantages. Naturally, the focus is to provide a single point of truth for secondary data, reducing the risk of having many teams and departments storing their own data copy. But is DataPlatform capable of delivering the performance requirements of AI and ML? Rawlinson takes us through some of the company strategy and sets the scene for what could be coming in the future. We could speculate that Marketplace might run in the public cloud and even become a profit centre for the business. We will have to wait and see.

You can find more details on Marketplace at the Cohesity website. Here’s a link to the Tech Field Day website, where you can find a great introduction to Marketplace.

Elapsed Time: 00:31:11

Timeline

  • 00:00:00 – Intros
  • 00:01:20 – Background, what is the Cohesity DataPlatform?
  • 00:04:00 – Perhaps “secondary” isn’t the right term
  • 00:05:30 – What is the Cohesity Marketplace?
  • 00:09:01 – What applications does the platform run?
  • 00:11:24 – How is useful data extracted from virtual machines?
  • 00:14:00 – Are the Marketplace applications free?
  • 00:17:09 – Marketplace enables application consolidation
  • 00:18:00 – Marketplace could become a profit centre
  • 00:19:40 – DataPlatform is totally API driven, apps via SDK
  • 00:23:00 – Is the process of managing apps a scalable one?
  • 00:25:29 – What examples are there of customer use cases?
  • 00:26:41 – How does Imanis Data fit into the platform?
  • 00:28:44 – Wrap Up

Related Podcasts & Blogs

  • #104 – Creating a Data Management Strategy with Paul Stringfellow
  • #81 – Storage or Data Asset Management?
  • The $2 billion Gamble on Data Management
  • The Need for APIs in Storage and Data Protection

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #E512.

 #110 – Storage Vendor Consolidations & Acquisitions | File Type: audio/mpeg | Duration: 34:12

We’ve started to see the consolidation of storage vendors as some startups and long-term players in the market get acquired. Is the reason for this buying spree one of positive growth, or a defensive position to maintain survival? Chris and Martin discuss the issues and the vendors doing the buying.

Who’s been buying? Violin Systems acquired part of X-IO (specifically the ISE products) as that company changed focus to its edge device (Axellio). DDN acquired Tintri and Nexenta. StorCentric, founded from Drobo and Nexsan, has acquired Retrospect and Vexata. Are we seeing a move to a new style of holding company? Maybe these vendors are keeping assets warm, either as part of OEM agreements or for future acquisition.

Elapsed Time: 00:34:12

Timeline

  • 00:00:00 – Intros
  • 00:01:28 – Vendor acquisitions – there’s a lot of them!
  • 00:02:09 – Why are acquisitions happening?
  • 00:04:20 – Violin Systems acquires (part of) X-IO
  • 00:08:53 – DDN – Tintri, Nexenta
  • 00:13:50 – StorCentric – Drobo, Nexsan, Retrospect and… Vexata?
  • 00:18:16 – Building a portfolio or standalone products
  • 00:20:45 – Or perhaps a new style of VC?
  • 00:24:09 – Are we in a storage consolidation phase?
  • 00:26:05 – What is the long-term strategy for these companies?
  • 00:30:47 – Are there any really bad products? Or just unlucky?
  • 00:32:47 – Wrap Up

Related Podcasts & Blogs

  • #64 – Success & Failure in Storage Startup Land
  • Garbage Collection – All-flash Market Consolidation

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #AEFA.

 #109 – An Overview of ObjectEngine with Brian Schwarz | File Type: audio/mpeg | Duration: 15:56

In this episode, Chris talks to Brian Schwarz, VP of Product Management for FlashBlade and ObjectEngine at Pure Storage. ObjectEngine is a scale-out de-duplication engine that efficiently writes data to either FlashBlade or public cloud object stores. The solution developed from the acquisition of StorReduce in 2018.

ObjectEngine was conceived when Pure Storage observed customers using FlashBlade for backup data. The FlashBlade platform was originally developed for high-performance file-based applications like analytics. De-duplication wasn’t integrated natively as an initial design decision. Combining ObjectEngine with FlashBlade enables space saving ratios of around 8:1 or greater. (A toy illustration of where such ratios come from follows these show notes.)

You can learn more about FlashBlade, ObjectEngine and other Pure technologies at Pure Accelerate. The event will be held in Austin this year, between 15-18 September 2019. You can follow Brian online at https://twitter.com/theschwarzbwthu (which we think is a Star Wars reference).

Elapsed Time: 00:15:56

Timeline

  • 00:00:00 – Intros
  • 00:00:45 – StorReduce was acquired 3Q2018
  • 00:01:30 – Where was the need for ObjectEngine?
  • 00:03:45 – FlashBlade was designed for low impact of CPU & Memory
  • 00:04:25 – How is ObjectEngine packaged and sold?
  • 00:07:40 – Has ObjectEngine driven new or existing customer engagement?
  • 00:09:17 – ObjectEngine is S3 object API on the input and output
  • 00:10:20 – How will ObjectEngine look in the public cloud?
  • 00:13:20 – What is the impact of encryption with ObjectEngine?
  • 00:14:55 – Pure Accelerate 2019 – this year in Austin

Related Podcasts & Blogs

  • #51 – Pure Accelerate Pregame
  • Soundbytes #008: FlashBlade 2.0 with Rob Lee at Pure Accelerate
  • Pure Storage ObjectEngine for Flash-based Backup

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode #3193.
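As a hedged illustration of where ratios like 8:1 can come from, here is toy chunk-level deduplication: fingerprint fixed-size chunks and store each unique chunk only once. ObjectEngine itself is far more sophisticated than this.

```python
# Toy chunk-level deduplication: fingerprint fixed-size chunks and store
# each unique chunk once. Real engines like ObjectEngine are far more
# sophisticated; this only shows where ratios such as 8:1 come from.
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    store = {}   # fingerprint -> unique chunk
    recipe = []  # ordered fingerprints needed to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i : i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)
        recipe.append(fp)
    return store, recipe

data = b"backup-block" * 8192  # highly redundant input stream
store, recipe = dedupe(data)
stored = sum(len(c) for c in store.values())
print(f"dedupe ratio ~ {len(data) / stored:.1f}:1")  # ~8:1 for this input
```

Backup streams are full of repeated blocks, which is why a dedupe front-end pairs so naturally with a fast object store behind it.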

 #108 – Druva Cloud-Native Data Protection with Curtis Preston (Sponsored) | File Type: audio/mpeg | Duration: 38:50

In this week’s episode, Chris talks to W. Curtis Preston. Curtis is a long-time and well-known industry expert in the backup area and now Chief Technologist at Druva. Data protection in a multi-cloud world introduces new challenges compared to traditional on-premises backup. As a result, Druva has developed a cloud-native platform that protects on-premises, cloud, endpoint and SaaS applications.

What does cloud-native actually mean? Chris and Curtis discuss the benefits of using native AWS public cloud services like S3, DynamoDB, RDS and EC2 instances. Compared to on-premises backup, where hardware is procured to meet high watermark peaks, public cloud data protection can be scaled on demand. This allows Druva to support customers with a cost model more closely aligned to actual usage.

What are the challenges of running purely from public cloud? Clearly there are issues with network bandwidth and getting that first backup in place. Druva uses Snowball Edge devices to seed the first backup and will support disaster recovery via Snowball devices in the future. (A sketch of source-side deduplication, one of the techniques discussed, follows these show notes.)

To learn more about Druva, visit www.druva.com. To follow Curtis on Twitter, follow @wcpreston or log onto Curtis’ data protection portal, www.backupcentral.com, where you can find Curtis’ new podcast.

Elapsed Time: 00:38:50

Timeline

  • 00:00:00 – Intros
  • 00:01:30 – Understanding cloud-native data protection
  • 00:02:09 – What data protection offerings does Druva offer?
  • 00:04:00 – Don’t forget to backup your SaaS data!
  • 00:04:50 – Lift and shift or build backup natively for the cloud
  • 00:06:40 – Dynamic scaling of AWS resources like EC2
  • 00:08:28 – DynamoDB, RDS, S3, EC2 all Druva cloud components
  • 00:09:30 – Charging is based on deduplicated backup data stored
  • 00:10:22 – Using Snowball Edge for “first backup” transfer
  • 00:11:20 – Is there an issue running in a single Druva account?
  • 00:13:20 – How does customer geography affect data backup?
  • 00:14:58 – Druva creates a blueprinted build to support new AWS regions
  • 00:16:37 – Who pays for bandwidth and egress charges?
  • 00:19:00 – Giant servers, giant tape libraries, giant networks!
  • 00:20:20 – Druva does source-side deduplication to reduce network traffic
  • 00:22:55 – How are multiple cloud providers supported?
  • 00:24:30 – How are full server and entire data centre restores achieved?
  • 00:27:00 – Druva performs “surgery” on backup images to convert to AMI
  • 00:29:36 – DR is the perfect workload for the public cloud
  • 00:32:00 – Endpoint analytics are already in place – e.g. ransomware
  • 00:34:00 – With SaaS, the charging process is already in place for new features
  • 00:36:00 – Approaching a $100m run rate
  • 00:37:30 – Wrap Up

Related Podcasts & Blogs

  • Talking Cloud Data Protection with Jaspreet Singh
  • Cloud Field Day 3 Preview: Druva
  • The Three Facets of Backup

Curtis’ Bio

W. Curtis Preston, also known as Mr. Backup.
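Source-side deduplication (timeline, 00:20:20) is what keeps the network traffic down: the client fingerprints its chunks first and uploads only those the cloud has not already seen. A hedged sketch of the idea, with names of our own invention rather than Druva's implementation:

```python
# Sketch of source-side deduplication (illustrative names, not Druva's
# implementation): fingerprint chunks at the client, ask which are new,
# and upload only those.
import hashlib

cloud_store = {}  # fingerprint -> chunk, held "in the cloud"

def backup(chunks):
    fingerprints = {hashlib.sha256(c).hexdigest(): c for c in chunks}
    missing = [fp for fp in fingerprints if fp not in cloud_store]
    for fp in missing:
        cloud_store[fp] = fingerprints[fp]  # only new chunks cross the wire
    return len(missing), len(chunks)

first = [b"os-image"] * 50 + [b"doc-%d" % i for i in range(5)]
sent, total = backup(first)
print(f"first backup: uploaded {sent} of {total} chunks")
sent, total = backup(first + [b"doc-new"])
print(f"second backup: uploaded {sent} of {total} chunks")
```

On the second run only the single new chunk is uploaded, which is the incremental-forever behaviour that makes backup over constrained links practical.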

 #107 – Should IBM Quit the Storage Hardware Business? | File Type: audio/mpeg | Duration: 26:49

IDC recently released their latest quarterly storage sales figures. The data shows, yet again, that IBM sales continue to decline. In this week’s podcast, Chris and Martin discuss the state of IBM’s storage business. Is it time for IBM to quit?

IBM has an embarrassment of riches in storage software and hardware (or a nice portfolio, as Martin puts it). Many of these solutions have evolved from other technology, like SVC and XIV. With the acquisition of Red Hat, IBM customers will have even more storage choice. Does this mean more flexibility or confusion?

Re-using technology is not a bad thing (although it’s worth checking out the wiring in the image of the A9000 on this post). However, with so many competing solutions, IBM could be in danger of confusing customers and failing to capitalise on the opportunities of integration with other platforms.

For more details on the latest IDC figures, see this post. The graph showing IBM sales is included here for reference.

Elapsed Time: 00:26:48

Timeline

  • 00:00:00 – Intros
  • 00:00:50 – IDC storage data shows IBM in revenue decline
  • 00:02:30 – Is IBM moving or losing their storage customers?
  • 00:04:00 – What is IBM’s storage portfolio?
  • 00:06:02 – What could be rationalised in the portfolio? DS8000?
  • 00:08:49 – Does IBM have an issue with the marketing message?
  • 00:10:00 – Could IBM have moved a lot of revenue to SDS?
  • 00:12:30 – ESS back from the dead!
  • 00:13:20 – But Red Hat (an IBM company) also has storage products!
  • 00:16:54 – Spectrum Scale supports containers – who knew?
  • 00:18:20 – Frictionless adoption – is that a problem for IBM?
  • 00:20:20 – Is the storage hardware sales metric still fit for purpose?
  • 00:21:30 – What do we recommend for IBM?
  • 00:24:00 – Simplify! Oh, and don’t forget about tape.
  • 00:25:00 – A “capacity under licence” measure would be better
  • 00:26:00 – Wrap Up

Related Podcasts & Blogs

  • IDC 1Q2019 storage data shows a tough market for appliance vendors
  • #43 – All-flash Market Review 2018 with Chris Mellor
  • IBM Buys Red Hat
  • IBM Reuses Existing Tech in Latest FlashSystem Release

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode #12B0.

 #106 – Introduction to VAST Data (Part II) with Howard Marks (Sponsored) | File Type: audio/mpeg | Duration: 21:18

In this second episode on VAST Data, Chris and Martin continue the discussion with Howard Marks. You can find the previous episode at #105 – Introduction to VAST Data (Part I). This time, the conversation continues where the discussion left off, with Howard finishing the explanation of wide striping.

To explain exactly how data is accessed on the platform, Howard introduces the concept of v-trees. These are like b-trees but flatter and wider. The v-tree is used to hold both metadata and data. One interesting aspect of the discussion is in understanding exactly how Optane and QLC are used. As data is written, Optane acts as a write cache but doesn’t need to be flushed immediately, as would happen in traditional systems. Instead, the VAST platform can take time to destage to QLC when appropriate, adding time for data reduction tasks to take place. (A toy model of this staging pattern follows these show notes.)

Howard wraps up the conversation with some detail on use cases, which at this stage are focused on “data intensive” applications. In time this will expand to traditional enterprises too. Roadmap items include SMB, snapshots and replication.

You can find more information on the VAST Data platform at www.vastdata.com. Follow VAST Data and Howard on Twitter.

Elapsed Time: 00:21:18

Timeline

  • 00:00:00 – Intros
  • 00:00:43 – Continuing the data layout discussion
  • 00:02:58 – V-trees – metadata and data
  • 00:05:00 – Data placement based on type, gaming the system?
  • 00:08:40 – What performance can customers expect?
  • 00:11:00 – Does it make sense to add block protocols to VAST Data?
  • 00:12:30 – What are the deployment models, SDS, appliance?
  • 00:15:00 – Still evolutionary, Martin? Revolutionary pricing?
  • 00:16:48 – Now we know the technology, what are the use cases?
  • 00:19:30 – Wrap Up

Related Blog Posts & Podcasts

  • #105 – Introduction to VAST Data (Part I) with Howard Marks
  • VAST Data launches with new scale-out storage platform

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode #FC05.
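Here is a toy model of the write-staging pattern Howard describes (ours, not VAST's code): writes are acknowledged from a fast persistent buffer and destaged to the capacity tier later, leaving time for data reduction to run first.

```python
# Toy model of write staging (ours, not VAST's implementation): writes are
# acknowledged from a fast persistent buffer and destaged to the capacity
# tier later, leaving time for data reduction first.
class StagedStore:
    def __init__(self):
        self.write_buffer = []   # stands in for Optane / persistent memory
        self.capacity_tier = []  # stands in for QLC flash

    def write(self, block: bytes):
        self.write_buffer.append(block)  # acknowledged immediately

    def destage(self):
        # Off the write path: reduce first, then commit to QLC.
        unique = set(self.write_buffer)  # trivial stand-in for reduction
        self.capacity_tier.extend(unique)
        self.write_buffer.clear()

store = StagedStore()
for block in [b"a", b"b", b"a", b"c", b"b"]:
    store.write(block)
store.destage()
print(f"writes acknowledged: 5, blocks destaged after reduction: {len(store.capacity_tier)}")
```

Because the buffer is persistent, acknowledged data is safe even before destage, which is what lets the platform take its time over reduction.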

 #105 – Introduction to VAST Data (Part I) with Howard Marks (Sponsored) | File Type: audio/mpeg | Duration: 30:49

This week, Chris and Martin talk to Howard Marks, Chief Storyteller at VAST Data. You may know Howard as an independent analyst and author for a range of online publications. Howard recently joined VAST to help explain and promote understanding of their data platform architecture.

The VAST Data platform uses three main technologies that have only recently emerged onto the market. QLC NAND flash provides long-term, cheap and fast permanent storage. 3D-XPoint (branded as Intel Optane) is used to store metadata and new data before it is committed to flash. NVMe over Fabrics provides the connectivity between stateless VAST front-end servers and JBOF disk shelves.

The architecture has some very subtle differentiated points that allow the solution to be scale-out and highly efficient. The server components are stateless because metadata isn’t cached locally. That removes issues of cache coherency and keeping all metadata synchronised. 3D-XPoint allows data to be written as huge stripes with as little as 3% overhead on large systems. (The arithmetic behind that overhead figure is sketched after these show notes.)

If you want to learn more about the VAST platform, check out https://www.vastdata.com, read our blog on the VAST technology or visit the Tech Field Day website where you’ll find more in-depth videos from the founders of the company.

Elapsed Time: 00:30:49

Timeline

  • 00:00:00 – Intros
  • 00:02:00 – Who is Howard Marks?
  • 00:02:30 – Who are VAST Data?
  • 00:04:00 – Do we need hyper performance or good enough?
  • 00:07:30 – Three technologies – QLC NAND & Optane
  • 00:09:30 – Intelligent JBOFs
  • 00:11:50 – Take a breather, Howard! Let’s review!
  • 00:13:30 – Server components are stateless containers
  • 00:14:50 – Shared nothing? No, DASE – Shared Everything
  • 00:17:40 – Why does persistent memory allow scale-out?
  • 00:19:34 – Wide stripe optimisation with erasure coding
  • 00:21:00 – Optimised deduplication with similarity hashing
  • 00:24:00 – Wide stripes with sequential I/O improve endurance for flash
  • 00:26:00 – It’s a log-based file system (not WAFL)
  • 00:29:00 – 10-year guarantee on QLC drives

Related Podcasts & Blogs

  • VAST Data launches new scale-out storage platform
  • QLC NAND – What can we expect from the technology?
  • What are TBW & DWPD?

Howard’s Bio

Howard Marks is VAST Data’s Technologist Extraordinary and Plenipotentiary, helping customers realize the advantages of Universal Storage. Before joining VAST, Howard spent 40 years as an independent consultant and storage industry analyst at DeepStorage. He is a frequent and highly rated speaker at industry events and a Storage Field Day delegate.

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode #FC04.
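The "as little as 3% overhead" figure follows from simple erasure-coding arithmetic: in a stripe of k data strips and p parity strips, protection overhead is p / (k + p). The stripe widths below are illustrative choices of ours, not VAST's published geometry.

```python
# Simple arithmetic behind wide-stripe erasure coding overhead: a stripe
# of k data strips plus p parity strips costs p / (k + p) in protection
# overhead. Stripe widths are illustrative, not VAST's actual geometry.
def overhead(k_data: int, p_parity: int) -> float:
    return p_parity / (k_data + p_parity)

for k, p in [(8, 2), (36, 4), (146, 4)]:
    print(f"{k}+{p} stripe: {overhead(k, p):.1%} protection overhead")
```

The widest stripe here works out at 2.7%, in line with the roughly 3% quoted above; buffering writes in persistent memory is what makes stripes that wide practical, since a full stripe can be assembled before anything is committed to flash.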
