
Storage Unpacked Podcast

Summary: Storage Unpacked is a technology podcast that focuses on the enterprise storage market. Chris Evans, Martin Glassborow and guests discuss technology issues with vendors and industry experts.

  • Artist: Storage Unpacked Podcast
  • Copyright: Copyright © 2016-2021 Brookend Ltd

Podcasts:

 #59 – Ethernet vs Fibre Channel | File Type: audio/mpeg | Duration: 27:57

This week's podcast looks at storage networking, and in particular the choice between Ethernet and Fibre Channel as the network protocol. Traditionally, enterprise storage platforms have been Fibre Channel connected, with only a small amount of iSCSI usage. However, the world isn't just block storage and in fact, as guest Marty Lans (General Manager, Storage Connectivity Engineering & Global Interoperability Business Unit at HPE) tells us, 80% of storage is Ethernet connected. This is because of the growth in unstructured data stored on NAS and object stores.

Ethernet storage has included lossless Ethernet (DCB) for some time. Ethernet is faster than Fibre Channel in raw throughput, yet Fibre Channel continues to see modest growth, according to Lans. The next wave of storage networking is arriving with the introduction of NVMe, bringing the FC-NVMe and NVMe-oF protocols, both of which improve performance for shared storage. So is it the end for Fibre Channel? Should the default be to move to Ethernet? The answer isn't that clear cut, as Marty explains.

Elapsed Time: 00:27:57

Timeline
* 00:00:00 – Intros
* 00:01:30 – Summarising the storage networking landscape
* 00:02:35 – 80% of all storage is attached via Ethernet
* 00:04:00 – Lossless Ethernet and DCB
* 00:06:00 – Why use Ethernet instead of Fibre Channel?
* 00:07:00 – HPE doesn't see the Fibre Channel business shrinking
* 00:08:50 – Whatever happened to Fibre Channel over Ethernet?
* 00:12:00 – Cloud service providers – all Ethernet, surely?
* 00:13:00 – NVMe, FC-NVMe and NVMe-oF
* 00:18:00 – How have form factors changed, TOR switches?
* 00:20:00 – Is there a storage equivalent of SDN?
* 00:23:50 – Should customers be actively moving away from FC?
* 00:25:00 – Where does storage networking go next?

Related Podcasts & Blogs
* #57 – Storage on the Edge with Scott Shadley
* #55 – Storage for Hyperscalers
* #47 – Enterprise Storage is Not Boring
* Garbage Collection #005 – Disaggregated Storage
* NVMe over Fabrics – Caveat Emptor?
* The Race towards End-to-End NVMe in the Data Centre
* One (Storage) Protocol to Rule Them All?

Marty's Bio
Marty Lans (General Manager, Storage Connectivity, HPE Complete, Storage Qualification & Interoperability Engineering) joined HPE in 2013 and is responsible for innovating and developing next-generation storage connectivity technology, implementing third-party ecosystem partnerships with HPE Complete, and storage qualification and interoperability engineering. Prior to joining HPE, Marty had more than 20 years of storage and networking industry experience, with executive roles at leading technology companies including EMC, Cisco Systems, Brocade Communications and Juniper Networks.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode I5FC.
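
As a back-of-the-envelope companion to the raw-throughput point, here is a minimal Python sketch (our own illustration, not from the episode) comparing usable line rates for common Fibre Channel and Ethernet speeds once physical-layer encoding is accounted for. The speeds and encoding schemes are the commonly published ones; treat the exact figures as assumptions rather than vendor data.

```python
# Illustrative only: nominal serial line rates (Gbit/s) and their
# physical-layer encodings. 8GFC uses 8b/10b encoding; 16/32GFC and
# 25/100GbE use the more efficient 64b/66b encoding.
links = {
    "8GFC":   (8.5, 8 / 10),
    "16GFC":  (14.025, 64 / 66),
    "32GFC":  (28.05, 64 / 66),
    "25GbE":  (25.78125, 64 / 66),
    "100GbE": (4 * 25.78125, 64 / 66),   # four 25G lanes
}

for name, (line_rate, efficiency) in sorted(
        links.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"{name:>7}: ~{line_rate * efficiency:5.1f} Gbit/s usable")
```

On these numbers, 100GbE comfortably outruns 32GFC on raw throughput, which is Marty's point; the counter-argument in the episode is that Fibre Channel's value lies in its purpose-built, deterministic fabric rather than the headline rate.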

 #58 – Storage Vendor Hero Numbers | File Type: audio/mpeg | Duration: 20:48

This week, Chris Mellor is back and the team takes a dive into the subject of storage vendor hero numbers. What are hero numbers? We've all seen them: the huge performance figures quoted by all-flash storage vendors, aimed at putting their products forward in the best light possible.

Are hero numbers believable, or should we be looking at certified vendor benchmark testing as a guide to capability? Do users even look at benchmark or hero numbers in the first place? Could the whole exercise be a waste of time? The conversation moves on to STAC (the Securities Technology Analysis Center) and whether more application-specific benchmarks have greater value. This includes the approach Pure Storage appears to have taken with FlashArray and FlashBlade, quoting only application-specific testing like that of AIRI.

As the discussion moves on, the team talks about consistent, deterministic latency (more on that here) and whether public cloud service providers will ever subject their service offerings to benchmark testing (TL;DR, probably not). As a final wrap-up, the team asks what customers can actually do to ensure they are seeing the truth behind benchmark claims.

Elapsed Time: 00:20:48

Timeline
* 00:00:00 – Intro
* 00:01:00 – What are vendor hero numbers – ten meeelion IOPS!!
* 00:02:00 – But certified benchmarks are expensive to perform
* 00:04:00 – Storage arrays aren't interesting, do hero numbers help?
* 00:05:00 – Do users even care about industry standard benchmarks?
* 00:06:00 – STAC – financial application benchmarks
* 00:07:48 – Should vendors be more realistic about hero numbers?
* 00:09:50 – Why has Pure Storage stopped producing FlashArray performance data?
* 00:11:40 – However, Pure does produce numbers for CI like AIRI (see Soundbytes #008)
* 00:14:30 – Do people do proofs of concept any more?
* 00:16:00 – Maybe arrays are massively over-specified anyway?
* 00:16:30 – Or actually, it's about consistent deterministic performance
* 00:17:44 – Do we need performance benchmarks for cloud vendors?
* 00:19:00 – So what should customers do?
* 00:20:08 – Wrap Up

Related Podcasts & Blogs
* Pure Storage AIRI – CI for AI
* Why Deterministic Storage Performance is Important
* Avoiding the Storage Performance Bathtub Curve
* Avoiding All-Flash Missing IOPS
* Are DataCore's SPC Benchmarks Unfair?
* NFS Vendor Benchmarks – Caveat Emptor
* Storage Performance: Why I/O Profiling Matters

Vendors referenced in this post: Violin Systems, Dell EMC (PowerMax), Pure Storage (FlashBlade & FlashArray), NetApp.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode UGE3.
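
Since the episode closes on consistent, deterministic latency, here is a quick synthetic illustration (our own made-up numbers, not any vendor's) of why a "hero" average can hide poor tail behaviour.

```python
import random

random.seed(42)
# Hypothetical device: 98% of I/Os complete in ~0.2 ms, but 2% hit a
# garbage-collection-style stall of ~20 ms.
samples = [random.gauss(0.2, 0.02) if random.random() < 0.98 else 20.0
           for _ in range(100_000)]
samples.sort()

mean = sum(samples) / len(samples)
p50 = samples[len(samples) // 2]
p99 = samples[int(len(samples) * 0.99)]

print(f"mean={mean:.2f} ms  p50={p50:.2f} ms  p99={p99:.2f} ms")
# The mean (~0.6 ms) still looks flattering; the p99 (~20 ms) is what an
# application actually feels -- hence the case for percentile-based numbers.
```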

 #57 – Storage on the Edge with Scott Shadley | File Type: audio/mpeg | Duration: 28:57

What is computational storage, and how do we manage storage at the edge of the network? This week's conversation discusses how storage and data are managed within IoT devices and partially covers the new segment of computational storage. Martin and Chris talk to Scott Shadley, VP of Marketing at NGD Systems.

Edge storage means more than simply managing the data on a Raspberry Pi or other small form-factor device. During the podcast, Scott uses the example of aircraft analytics, which generate terabytes of content per flight. How do these implementations work? How will data be secured and safely accessed in an architecture where none of the devices is within the confines of the data centre?

At this point there don't seem to be any real standards for distributing and managing code, or for tracking the concurrency of content consolidation. However, with new bodies like the OpenFog Consortium forming, we can expect this sector to develop and mature.

Elapsed Time: 00:28:57

Timeline
* 00:00:00 – Intros
* 00:01:00 – What is edge computing and what are the storage needs?
* 00:02:00 – Didn't we already have distributed computing?
* 00:04:30 – Why process on the edge at all?
* 00:05:30 – A real example – aeroplane sensor/analysis data
* 00:09:00 – What are the implementation models?
* 00:10:40 – What is fog computing?
* 00:14:00 – How would distributed data be secured?
* 00:15:30 – EFTPOS – the SEN room?
* 00:17:30 – What type of code could actually be run on the edge?
* 00:20:40 – What standards exist for code distribution/management?
* 00:21:50 – Data management – how do we protect content?
* 00:23:50 – How would a computational storage device work?
* 00:26:45 – Where is the market for computational storage headed?
* 00:27:56 – Wrap up!

Related Podcasts & Blogs
* #36 – The Persistence of Memory with Rob Peglar
* #37 – State of the Storage Union with Chris Mellor
* Garbage Collection #006 – LTO-8 and the Future of Tape
* The Practicality of In-Situ Processing

Scott's Bio
Scott Shadley, Storage Technologist and VP of Marketing at NGD Systems, has more than 20 years of experience with storage and semiconductor technology. At STEC he was part of the team that enabled and created the world's first enterprise SSDs. He spent 17 years at Micron, most recently leading the SATA SSD product line with record-breaking revenue and growth for the company. He is active on social media, a lover of all things high-tech, enjoys educating and sharing, and is a self-proclaimed geek around mobile technologies.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode GI05.
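
To see why processing on the edge (or on the storage device itself) is attractive, here is a back-of-the-envelope sketch. The flight-data size, query selectivity and uplink speed are all hypothetical.

```python
flight_data_tb = 4          # hypothetical sensor capture per flight
selectivity = 0.001         # fraction of records one analytics query needs
uplink_tb_per_hour = 0.45   # roughly a 1 Gbit/s link

ship_everything = flight_data_tb              # filter centrally on the host
ship_results = flight_data_tb * selectivity   # filter in situ on the device

print(f"host-side filtering: {ship_everything / uplink_tb_per_hour:.1f} hours of transfer")
print(f"in-situ filtering  : {ship_results / uplink_tb_per_hour * 3600:.0f} seconds of transfer")
```

The arithmetic is trivial, but it captures the computational storage pitch: move the query to the data, not terabytes of data to the query.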

 #56 – Defining Scale-out Storage | File Type: audio/mpeg | Duration: 19:38

This week's podcast is a conversation between Martin and Chris about how we define scale-up and scale-out storage. A recent discussion on Twitter about Pure1 and the idea of federated scale-out generated some interesting feedback, so we thought it might be good to get some definitions in place.

The opening discussion covers how scale-up and scale-out should be defined and what definitions of scale-out exist. Volume managers were the old-school way of implementing federation, as was storage virtualisation, so perhaps federation is a genuine use case.

Martin and Chris move on to talk about NAS and object, then reflect on how the cloud computing providers implement their storage platforms. Surely these are scale-out? The truth is, that level of detail is generally kept secret. Finally, the conversation covers one of the biggest storage headaches – rebalancing.

Elapsed Time: 00:19:38

Timeline
* 00:00:00 – Intros
* 00:01:00 – Scale-out versus scale-up – how do we define them?
* 00:02:30 – Tightly-coupled, loosely-coupled and federated scale-out
* 00:03:30 – So what solutions are scale-up?
* 00:05:00 – Is federated storage a fair version of scale-out?
* 00:07:00 – Does scale-up have a value with very large drive types?
* 00:07:30 – Volume managers – old-school federation
* 00:09:30 – What about NAS and object store solutions?
* 00:12:00 – How does public cloud storage implement scale-out?
* 00:15:00 – The biggest storage management issue – rebalancing
* 00:18:20 – So should we care about scale-out or scale-up?
* 00:19:15 – Wrap Up

Vendors mentioned in this podcast: Nimble, SolidFire, NetApp, Pure Storage, Tintri, Dell EMC and IBM.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode J879.
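
On the rebalancing point, here is a small sketch (generic, not any vendor's implementation) of why placement schemes matter at scale: naive modulo placement remaps most data when a node is added, while consistent hashing moves only about 1/(N+1) of it.

```python
import bisect
import hashlib

def h(s: str) -> int:
    """Uniform-ish hash of a string onto a large integer ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(nodes, vnodes=100):
    """Place vnodes points per node on the hash ring."""
    points = sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
    return [p[0] for p in points], [p[1] for p in points]

def ring_place(key, hashes, owners):
    """A key belongs to the next node point clockwise around the ring."""
    i = bisect.bisect(hashes, h(key)) % len(hashes)
    return owners[i]

keys = [f"volume-{i}" for i in range(10_000)]
nodes4 = [f"node-{i}" for i in range(4)]
nodes5 = nodes4 + ["node-4"]

# Naive modulo placement: adding a fifth node remaps ~80% of keys.
moved_mod = sum(h(k) % 4 != h(k) % 5 for k in keys)

# Consistent hashing: only ~1/5 of keys move, all onto the new node.
h4, o4 = build_ring(nodes4)
h5, o5 = build_ring(nodes5)
moved_ring = sum(ring_place(k, h4, o4) != ring_place(k, h5, o5) for k in keys)

print(f"modulo placement: {moved_mod / len(keys):.0%} of keys move")
print(f"consistent hash : {moved_ring / len(keys):.0%} of keys move")
```

Real scale-out systems layer replication and capacity-awareness on top of ideas like this, but the fraction of data that moves is exactly what makes rebalancing the headache discussed in the episode.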

 #55 – Storage for Hyperscalers | File Type: audio/mpeg | Duration: 27:54

This week we talk to Mark Carlson, co-chair of the SNIA Technical Council, about the storage needs of hyperscalers. Mark defines hyperscalers as those companies opening multiple data centres a year – most notably Amazon, Microsoft Azure, Google and Facebook in the US, and Baidu, Alibaba and Tencent in China. These companies are deploying petabytes of storage a year, with specific requirements on storage media.

Hyperscaler applications have issues with HDD and SSD performance characteristics, such as tail latency and the effects of garbage collection. As a result, drive manufacturers are building in new features to meet the needs of these companies. What exactly are the issues? The conversation covers some of the typical problems, like reusing partially failed media, the problems caused by tail latency, write amplification, and why non-deterministic latency is such an issue. Some vendor solutions include Depop, Fast Fail and Open Channel. Not sure what these are? Listen in to find out!

In the podcast, Mark mentions a white paper on hyperscaler storage. You can find it here (PDF). The SNIA YouTube channel can be found here. Mark also mentions his presentation at Tech Field Day, which can be found here.

Elapsed Time: 00:27:54

Timeline
* 00:00:00 – Intro
* 00:01:30 – Who do we mean by hyperscalers?
* 00:02:30 – ODM (not OEM) – Original Design Manufacturers
* 00:04:00 – Aren't HPE and Dell selling to hyperscalers?
* 00:05:30 – Why do hyperscalers need different storage media?
* 00:07:00 – Differences of approach to resiliency compared to the enterprise
* 00:09:00 – Don't we want to have partial device failures?
* 00:10:20 – Depop – depopulate and reset a drive to factory settings
* 00:11:30 – Back to tail latency – extended response times
* 00:12:30 – NVM Sets to logically partition a drive for each application
* 00:13:40 – Fast Fail for HDD reads
* 00:15:00 – Putting drives into a deterministic window – deferring maintenance
* 00:16:30 – The Open Channel approach – let the host do the maintenance work
* 00:17:30 – How are changes introduced? ECNs and Technical Proposals
* 00:22:00 – Will hyperscaler features filter down to the enterprise?
* 00:26:00 – Wrap Up

Mark's Bio
Mark A. Carlson, Principal Engineer, Industry Standards at Toshiba, has more than 35 years of experience with networking and storage development and more than 20 years of experience with Java technology. Mark was one of the authors of the CDMI cloud storage standard. He has spoken at numerous industry forums and events. He is co-chair of the SNIA Cloud Storage and Object Drive technical working groups, and serves as co-chair of the SNIA Technical Council.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode HIJ4.
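
As a rough illustration of the write amplification problem mentioned above, here is a toy greedy-GC model (our simplification, not from the SNIA paper): when a flash block is erased, its still-live pages must be copied elsewhere first, so the flash sees more writes than the host issued.

```python
host_writes_gb = 1_000   # data the application actually wrote
live_fraction = 0.6      # hypothetical: 60% of pages in each reclaimed
                         # block are still valid and must be rewritten

# Freeing a block's worth of space costs `live_fraction` of a block in
# copy-writes, so under this model WA = 1 / (1 - live_fraction).
wa = 1 / (1 - live_fraction)
print(f"write amplification ~{wa:.1f}x -> "
      f"{host_writes_gb * wa:,.0f} GB physically written to flash")
# This background copying also produces the non-deterministic latency
# hyperscalers dislike; NVM Sets and Open Channel designs aim to move or
# schedule it away from the I/O path.
```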

 #54 – Are we at All-flash and HDD Array Price Parity? | File Type: audio/mpeg | Duration: 25:58

This week, Chris and Martin discuss price parity between all-flash and hybrid or HDD-based storage arrays. From Martin's perspective, vendor pricing is getting close to parity, and it is starting to make more sense to buy all-flash than a spinning-media device. However, what are the issues? TCO is one – it's not all about array pricing. Flash also offers more opportunity to be Opex-focused, as it's easier to increment flash in an array than to add many disks to maintain performance.

So what about hybrid? Does it have a position if the marketing is going all-flash for traditional workloads and all-HDD for archive and unstructured data? The conversation closes by questioning whether vendors offering rental rather than purchase makes sense, and whether the industry can afford not to go all-flash.

During the podcast, we mention Chris Mellor's article on flash revenue. You can find it here. We also talk about the Micron 5210 QLC SSD. You can find some thoughts on that product here. Finally, we talk about the 40th anniversary of the Intel x86 processor. You can find more details here.

Elapsed Time: 00:25:58

Timeline
* 00:00:00 – Intros
* 00:01:00 – Are we at price parity between all-flash and disk-based systems?
* 00:01:30 – What's being pushed out? 15K, 10K drives?
* 00:02:30 – On what basis are we seeing price parity – raw, usable/effective?
* 00:03:30 – TCO is important; flash provides better environmental costs
* 00:05:00 – Licensing – should you have to pay for licences you will never use? (discussed on #39)
* 00:06:30 – How is the price of flash declining? (discussed on #9)
* 00:08:20 – The 100TB flash drive (discussed on #44)
* 00:09:00 – With flash, do we need tiering?
* 00:10:00 – Will we have a spectrum of flash, or will SLC and MLC disappear?
* 00:14:00 – Could flash drives actually last much longer than the vendors claim?
* 00:14:50 – Backblaze data – making choices from available information
* 00:18:00 – Are we at a time when a flash-first policy is realistic?
* 00:19:00 – Is there any point to hybrid arrays?
* 00:20:30 – Now that prices are at parity, would an Opex solution be more appropriate?
* 00:22:00 – Does flash make it easier to buy on demand and expand in smaller increments?
* 00:24:00 – Can we afford not to move to a flash solution with today's performance needs?
* 00:25:00 – It's a Wrap

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 934E.
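
The raw-versus-effective distinction in the timeline is worth a worked example. The prices and the 4:1 data reduction ratio below are placeholders, not quotes from the episode or any vendor.

```python
hdd_price_per_gb = 0.03   # hypothetical hybrid/HDD array, raw $/GB
afa_price_per_gb = 0.15   # hypothetical all-flash array, raw $/GB

hdd_reduction = 1.0       # little dedupe/compression on spinning disk
afa_reduction = 4.0       # a commonly claimed 4:1 ratio on all-flash

print(f"HDD effective: ${hdd_price_per_gb / hdd_reduction:.3f}/GB")
print(f"AFA effective: ${afa_price_per_gb / afa_reduction:.3f}/GB")
# On raw $/GB flash looks 5x dearer; after a 4:1 reduction the gap narrows
# to ~25%, before counting power, cooling and rack-space savings (the TCO
# argument made in the episode).
```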

 #53 – Persistent Storage and Kubernetes with Evan Powell | File Type: audio/mpeg | Duration: 27:51

This week Chris and Martin talk to Evan Powell, CEO of MayaData (the company behind OpenEBS) and formerly the founding CEO of Nexenta. The conversation covers the use of persistent storage with the container orchestration platform Kubernetes. Despite what the industry might think, persistent storage that can be mapped to a container (or in this case, a pod) is still an important problem to solve.

Evan sets the scene with some background on pods, stateful sets, claims and storage classes. As the conversation proceeds, the team discusses the way developers expect to consume cloud-native storage, and in particular the abstraction of infrastructure. Evan provides some background on CSI (the Container Storage Interface) and how it allows storage vendors to interface with Kubernetes without lots of code changes. Finally, the wrap-up covers the gaps in current development and the projects Evan is working on.

You can find more information on OpenEBS at https://www.openebs.io/, with links to MayaOnline and Litmus, which Evan references at the end of the recording. The discussion also mentions the CNCF, or Cloud Native Computing Foundation, under which Kubernetes has been developed. Evan makes reference to the Intel Storage Performance Development Kit (SPDK) and to Google Fuchsia.

Why not give us some feedback? You can find us on LinkedIn here.

Elapsed Time: 00:27:51

Timeline
* 00:00:00 – Intro
* 00:01:00 – How is persistent storage connected to Kubernetes?
* 00:03:00 – Pods, stateful sets, claims – what does it all mean?
* 00:05:30 – Defining storage as code with storage classes
* 00:06:30 – Static versus dynamic provisioning
* 00:08:00 – Application or infrastructure-level data resilience?
* 00:09:00 – Developers, developers, developers!
* 00:11:00 – CSI – Container Storage Interface
* 00:14:30 – Does CSI have enough industry knowledge?
* 00:16:00 – The storage industry has a terrible management software reputation
* 00:17:00 – Is there an opportunity to bypass traditional storage protocols?
* 00:22:00 – Hurrah – mainframe! System Managed Storage
* 00:23:00 – So where are the gaps, what's missing?
* 00:26:00 – What are OpenEBS, MayaOnline and Litmus?
* 00:27:00 – Wrap Up

Evan's Bio
Evan Powell is CEO of MayaData, where he helped found the popular OpenEBS containerised storage project. Previously he was founding CEO of three enterprise infrastructure software companies: Clarus Systems (RVBD), Nexenta Systems and, most recently, StackStorm (BRCD). Under his leadership, Nexenta became the leader of the OpenStorage movement and the software-defined storage sector, with thousands of customers and hundreds of millions of dollars of annual partner sales. As founding CEO of StackStorm, Evan helped to create the event-driven automation category, supporting a vibrant open-source community leveraging and improving upon approaches used by the largest operators such as Facebook. Prominent StackStorm users include Netflix, Cisco and Dimension Data. Brocade purchased StackStorm in 2016.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode JJ39.
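
For readers new to the Kubernetes objects Evan describes, here is a minimal sketch of a StorageClass and a PersistentVolumeClaim, built as Python dicts that mirror the YAML manifests. The field names are standard Kubernetes API; the provisioner name and its parameters are hypothetical.

```python
import json

# A StorageClass defines a class of storage "as code"; pods never see it
# directly, they just name it in their claims.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast-replicated"},
    "provisioner": "example.com/provisioner",  # hypothetical CSI driver
    "parameters": {"replicaCount": "3"},       # driver-specific settings
}

# A PersistentVolumeClaim requests a volume of that class; with dynamic
# provisioning the volume is created on demand.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-replicated",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(json.dumps(storage_class, indent=2))
print(json.dumps(pvc, indent=2))
```

A stateful set's volumeClaimTemplates stamps out one such claim per pod, which is how databases and other stateful workloads get stable storage.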

 #52 – An Introduction to WekaIO Matrix with Liran Zvibel (Sponsored) | File Type: audio/mpeg | Duration: 27:27

This week's guest episode was recorded live in Silicon Valley at the offices of WekaIO. The company has developed a scale-out parallel file system called Matrix, specifically designed to exploit NVMe storage and new, fast networking. Chris is on-site to talk with CEO and co-founder Liran Zvibel. Martin is dialled in remotely from the bowels of the Storage Unpacked offices.

The conversation covers how Matrix was developed to work with new media while addressing some of the issues seen in the use of parallel file systems, such as managing high levels of throughput and small-file content. Building a scale-out file system is challenging, and Weka developed its own user-space storage operating system to ensure the performance of NVMe could be fully exploited. Liran discusses the SPEC SFS 2014 benchmark tests, which can be found here. Matrix is available for AWS on i3 EC2 instances. More details can be found here (PDF).

Elapsed Time: 00:27:27

Timeline
* 00:00:00 – Intros
* 00:01:00 – What is WekaIO Matrix?
* 00:02:20 – Why are parallel file systems hard to build?
* 00:04:30 – What is it that end users want from a parallel file system?
* 00:05:00 – How is Matrix packaged and delivered?
* 00:07:00 – A real-time user-space storage operating system
* 00:08:00 – Native Linux and a native file system driver
* 00:11:00 – Snap-to-object – store snapshots on an object store
* 00:13:00 – How is data protected within Matrix?
* 00:16:00 – How does performance scale with additional hardware?
* 00:17:00 – How does Matrix deploy in public cloud?
* 00:18:00 – Matrix SPEC SFS benchmark results
* 00:20:30 – What are the typical customer deployment models?
* 00:22:00 – Storage pets versus storage cattle
* 00:24:00 – Consumption models

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode F7G4.
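
As a generic illustration of the parallel file system idea (emphatically not WekaIO's actual data layout), here is a sketch of striping a file across nodes in fixed-size chunks so that large transfers engage many servers at once.

```python
CHUNK_SIZE = 1 << 20   # 1 MiB chunks (placeholder value)
NODES = [f"server-{i}" for i in range(8)]

def chunk_location(offset: int) -> str:
    """Map a byte offset to its owning node with round-robin placement."""
    chunk = offset // CHUNK_SIZE
    return NODES[chunk % len(NODES)]

# A 16 MiB sequential read touches every server twice, spreading the load:
touched = [chunk_location(off) for off in range(0, 16 << 20, CHUNK_SIZE)]
print(touched)
```

The hard parts Liran describes, such as metadata coherence, small-file performance and data protection, are precisely what this toy model leaves out.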

 #51 – Pure Accelerate Pregame | File Type: audio/mpeg | Duration: 19:25

This episode was recorded at Pure Accelerate in San Francisco on Tuesday 22nd May 2018, just before the event started. Chris talks to Matt Leib and, eventually, Ray Lucchesi – initially about the role of the reseller, then moving on to NVMe over Fabrics. As with all live-recorded podcasts, there's a little background noise, including the weekly noon siren test. The group speculates on what we can expect to be announced by Pure, the benefits of the Evergreen program, then whether we're headed back to a hardware future for storage.

In the recording, Ray talks about a recent vendor interview. We're fairly certain he's talking about Apeiron. Look out for a follow-up to this podcast, discussing what was actually announced by Pure Storage.

Elapsed Time: 00:19:25

Timeline
* 00:00:00 – Intros
* 00:01:30 – The role of the reseller at tech conferences
* 00:04:00 – What is the rate of technology adoption really like?
* 00:07:00 – Argh! Single Pain of Glass!!
* 00:08:00 – What about NVMe adoption?
* 00:08:45 – Will Pure release new technology?
* 00:10:00 – Evergreen – good for Matt's customers
* 00:11:30 – Ray joins us for a discussion of NVMe over Fabrics (and Apeiron)
* 00:13:30 – And we get an air raid warning test!
* 00:15:00 – What comes after NVMe-oF?
* 00:16:00 – Are we headed back towards a hardware future for storage?
* 00:17:00 – What is the impact for software-defined storage?

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 2G0J.

Disclaimer: Chris was an invited guest at Pure Accelerate. Pure Storage covered flights and accommodation costs. There is no incentive or requirement to produce media from the event, and no review of any content is made by Pure Storage before publication. Pure Storage Inc is a client of Brookend Ltd.

 #50 – Introduction to Quantum Computing | File Type: audio/mpeg | Duration: 27:01

How does quantum computing relate to storage? This week's episode is a bit of a diversion from our normal podcasting topics. In episode #47 (Enterprise Storage is Not Boring), we mused about how quantum computers use data. With no real knowledge between us, we asked listeners to help us out. As a result, we're joined in this episode by Scott Crowder, Vice President & CTO, Quantum Computing, Technical Strategy & Transformation, IBM Systems. That's a long job title, Scott!

During the show we try to get to the bottom of what quantum computing really means. What niche does it serve? What does a quantum computer look like? How do I program it? We also ask the ever-present question on encryption and whether quantum will break all of our secure transmission protocols.

As for the question on data, it seems quantum computing is more mundane in the storage area than expected. We can expect classical computers to be around for a long time to do the traditional number crunching.

The podcast references a number of items, including: vector co-processors for mainframe, Grover's Algorithm, rotating wave approximation, Shor's Algorithm and, of course, links to IBM's Q Experience and the Q Network. There is also some additional background on quantum computing from IBM here, and you can try out the card test game here, which shows how quantum processing differs from classical computing.

Elapsed Time: 00:27:01

Timeline
* 00:00:00 – Intros
* 00:00:30 – What is quantum computing?
* 00:02:40 – Quantum mechanics – superposition and entanglement
* 00:05:00 – It's all about solving exponentially hard problems
* 00:05:16 – Some examples: factoring, quantum chemistry, travelling salesman
* 00:07:00 – What does entanglement mean?
* 00:08:00 – What is a qubit?
* 00:11:25 – What does a quantum computer actually look like?
* 00:13:00 – Programming with Python! With a classical and quantum mix
* 00:15:00 – But how do we actually measure the result of an execution of code?
* 00:19:00 – How does this relate to storage, data and practical applications?
* 00:22:00 – How will quantum computing impact encryption?
* 00:24:20 – What is IBM using quantum computing for today?
* 00:26:00 – Wrap Up

Scott's Bio
Scott Crowder is currently Chief Technical Officer and Vice President, Quantum Computing, Technical Strategy and Transformation for IBM Systems. His responsibilities include leading the commercialization effort for quantum computers, driving the strategic direction across the hardware and software-defined systems portfolio, leading the agile and Design Thinking transformation, and accelerating innovation within development through special projects.

Copyright (c) 2016-2018 Storage Unpacked. Post #Z5RL. No reproduction or re-use without permission.

Image by UCL Mathematical and Physical Sciences from London, UK (Quantum refrigerator at UCL) [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons.
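
To get a feel for the "exponentially hard problems" point, here is a quick worked comparison (our own, not from the show) of query counts for unstructured search: a classical linear scan versus Grover's algorithm, which needs roughly (π/4)√N oracle calls.

```python
import math

for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2                      # expected queries, linear scan
    grover = (math.pi / 4) * math.sqrt(n)  # Grover's quadratic speedup
    print(f"N={n:>13,}: classical ~{classical:>11,.0f}, Grover ~{grover:,.0f}")
```

The quadratic win is real but modest next to Shor's algorithm, which factors integers in polynomial time against the best known classical sub-exponential methods, hence the encryption question in the episode.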

 #49 – Reputation in Technology Marketing | File Type: audio/mpeg | Duration: 29:36

This week Chris and Martin talk to Gina Minks, who works in product marketing at VMware within the Cloud BU. The discussion evolved from a post Gina published on www.24x7itconnection.com about why vendors shouldn't go negative when promoting their own technology. We've seen hyperbole from many vendors in the past – you know who you are!

During the conversation, the topics cover the original Twitpisses that used to take place regularly on social media. This leads on to the independence of bloggers who have been acquired by big corporations yet still use their own blogging domains.

From there, the discussion moves to the use of influencers in marketing and how credibility and authenticity can help in the promotion of products and services. Finally, we wrap up with a discussion on whether companies need to "go big" in order to get themselves heard above the increasing noise of social media and blogging. Apologies for the mid-podcast clicking – that's Gina fiddling with her beads!

Remember, you can find Gina on Twitter at https://twitter.com/gminks, at www.24x7itconnection.com and on Wide World of Tech.

Elapsed Time: 00:29:36

Timeline
* 00:00:00 – Intro
* 00:02:00 – Gina's blog post on marketing reputation
* 00:06:30 – Remember the epic Twitpiss conversations?
* 00:07:30 – "Independent" bloggers that actually work for a corporation
* 00:08:40 – Who's Chad?
* 00:13:30 – How much do influencers affect the marketing of vendors?
* 00:15:30 – Bacon rolls again, Martin
* 00:18:00 – Marketing seems related to credibility
* 00:20:00 – How do authenticity and knowing the person drive better marketing?
* 00:22:10 – How can reputation be managed?
* 00:24:30 – Do companies need to "go big" with their marketing to get the story across?
* 00:27:00 – Wrap Up

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission.

 #48 – Introduction to Datrium DVX (Sponsored) | File Type: audio/mpeg | Duration: 32:30

This week's podcast was recorded on the road a few weeks ago in Silicon Valley. Chris dropped into the Datrium offices in Sunnyvale to talk about DVX, Datrium's open converged platform. Taking part in the podcast are Sazzala Reddy, CTO and co-founder, and Tushar Agrawal, Director of Products.

Datrium DVX is an extension of HCI that scales performance and capacity separately. Persistent storage is deployed on a shared array, with each compute host using local cache in the form of flash storage. You can find out more in a recent blog post covering DVX.

The conversation covers why traditional HCI has issues scaling and how Datrium has solved some of these challenges. This leads on to a discussion of the features of DVX releases, including DVX 4.0, which offers backup to public cloud.

During the podcast, Sazzala mentions Little's Law (details here). He also discusses log-structured file systems and VMware founder Mendel Rosenblum. A copy of his paper can be found here. Also, here's the original Google MapReduce paper.

Elapsed Time: 00:32:30

Timeline
* 00:00:00 – Intro
* 00:01:00 – The challenges of the data centre
* 00:02:00 – The heritage of knowledge on data deduplication
* 00:03:00 – Understanding the Datrium architecture
* 00:06:00 – What are the issues with traditional HCI?
* 00:08:00 – Splitting storage performance and capacity
* 00:10:00 – What improvements are customers seeing?
* 00:12:00 – Log-structured file system implementation
* 00:14:00 – How does failover/recovery work?
* 00:15:00 – How does the solution address some of the backup issues?
* 00:19:00 – Deep dive – global deduplication, compression, data integrity
* 00:24:30 – Deployment models
* 00:27:00 – DVX version 4.0 and a recap of the previous releases
* 00:30:30 – What is Cloud DVX?
* 00:31:30 – Wrap Up

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission.
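
Little's Law, which Sazzala cites, is worth a worked example in storage terms: outstanding I/Os (L) equal throughput (λ) times latency (W). The workload targets below are made up.

```python
target_iops = 200_000   # hypothetical workload throughput (lambda)
latency_s = 0.0005      # 500 microseconds per I/O (W)

queue_depth = target_iops * latency_s   # L = lambda * W
print(f"required concurrency: ~{queue_depth:.0f} outstanding I/Os")
# Halve the latency and the same queue depth sustains twice the IOPS --
# one way to see why low-latency flash changes host-side cache designs.
```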

 #47 – Enterprise Storage is Not Boring | File Type: audio/mpeg | Duration: 31:54

This week Martin and Chris talk to Stephen Foskett, chief organiser at Tech Field Day. The team refutes the notion that enterprise storage is boring, as posited by good friend Keith Townsend, also known as The CTO Advisor. To be fair, Keith was putting a positive spin on storage; however, we thought it would be good to expand the conversation and look at why storage is so innovative. It's clear that products are driven by a need to constantly improve the status quo, whether that be reducing costs, increasing capacity or improving performance.

The discussion touches on hardware, software and the quality of management products in the market today. At the end of the conversation we ask how data is handled in quantum computing. If you can help answer this, drop us a line and you could be on the show!

Elapsed Time: 00:31:54

Timeline
* 00:00:00 – Intro
* 00:01:30 – Keith claims enterprise storage is boring
* 00:02:00 – Why is there so much happening in the storage industry?
* 00:04:00 – $7 for 32GB of thumbnail-size storage
* 00:05:15 – Do storage innovations reach us in waves?
* 00:06:20 – Trade-offs – do they introduce diversification?
* 00:07:00 – What observations can we make from Storage Field Day companies?
* 00:08:30 – Storage gestation period – Storage Developer Conference
* 00:10:10 – Storage has an important job to do!
* 00:12:00 – What did Dell use Ocarina for?
* 00:12:30 – People, and that recycling and improvement of ideas
* 00:15:30 – Hardware still drives software
* 00:17:00 – Mass Storage Subsystem!
* 00:19:00 – SRM software is rubbish! (says Martin)
* 00:22:30 – What storage product hasn't been successful and never made it?
* 00:26:30 – Has Stephen become any better at detecting good companies?
* 00:29:04 – Talking of interesting, how is data managed in quantum computing?
* 00:30:50 – Wrap Up

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission.

 #46 – Another View on Open Source Storage with Neil Levine | File Type: audio/mpeg | Duration: 32:26

A few weeks ago we discussed Open Source storage and whether it has any place in the enterprise. There was a lot of feedback, so we thought it would be good to follow up with another discussion, and we invited Neil Levine, Director of Product Management at Red Hat, to give us his view. Martin, Chris and Neil discuss the difference in buying approach and whether bringing storage in under the radar is a good strategy.

The conversation moves on to the Facebook strategy of "move fast and break things", the situation with Dell EMC ScaleIO, and whether Open Source storage still has lock-in. Finally, we wrap up by discussing super-architects.

Elapsed Time: 00:32:26

Timeline
* 00:00:00 – Intro
* 00:01:30 – How does the buying strategy differ with Open Source?
* 00:04:30 – Storage is being included as part of a wider platform strategy
* 00:06:30 – How loose are evaluation processes for storage?
* 00:10:00 – Move fast and break things – is that appropriate for storage?
* 00:13:00 – With such long POCs, is there a trust problem for legacy vendors?
* 00:16:30 – ScaleIO – not entirely dedicated to Dell hardware
* 00:20:00 – Is there still lock-in with Open Source?
* 00:21:00 – Do we need super-architects to design solutions that use cloud?
* 00:23:30 – How are new platforms driving Open Source storage?
* 00:30:49 – Wrap up

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission.

 #45 – Modern Software Defined Storage With Avinash Lakshman | File Type: audio/mpeg | Duration: 24:06

This week Chris is in the Bay Area and was able to catch up for a chat with Hedvig CEO Avinash Lakshman. Hedvig is a start-up developing a scale-out software-defined storage platform, which makes Avinash the perfect guest to explain just where SDS has reached. Avinash also has serious SDS credentials, having co-developed Amazon Dynamo and developed Cassandra at Facebook.

The conversation starts with a review of where SDS has reached compared to just five years ago. Chris and Avinash discuss whether the enterprise has really bought into SDS as a concept, and whether public cloud is driving the use of SDS in multi-cloud configurations. Finally, the conversation finishes on how DevOps and the developer movement want to consume storage resources.

Hedvig can be found at https://www.hedvig.io (new website) and on Twitter at https://twitter.com/hedviginc.

Elapsed Time: 00:24:06

Timeline
* 00:00:00 – Intro
* 00:01:50 – How has SDS matured compared to five years ago?
* 00:03:20 – Has the enterprise bought into SDS?
* 00:05:00 – What are the most important features of SDS?
* 00:07:24 – Is HCI delivered by SDS – what is the dependency?
* 00:08:20 – How has public cloud affected the adoption of SDS?
* 00:10:00 – Does the enterprise really want to move multi-cloud?
* 00:12:00 – How much is data mobility a factor in SDS solutions?
* 00:14:30 – What agility is multi-cloud providing for end users?
* 00:16:00 – New storage services sit on top of public cloud and add value
* 00:17:00 – What impact is SDS having on DevOps?
* 00:18:30 – APIs – essential to delivering efficient SDS
* 00:20:00 – What is the future of SDS?
* 00:23:23 – Wrap Up

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission.
