Storage Unpacked Podcast

Summary: Storage Unpacked is a technology podcast that focuses on the enterprise storage market. Chris Evans, Martin Glassborow and guests discuss technology issues with vendors and industry experts.

  • Artist: Storage Unpacked Podcast
  • Copyright: © 2016-2021 Brookend Ltd

Podcasts:

 #89 – Choices in NVMe Architectures | File Type: audio/mpeg | Duration: 27:37

This episode was recorded while Chris was in Silicon Valley for Storage Field Day 18. Although not part of the event, Chris took time to catch up with Pavilion Data, in particular Jeff Sosa, VP Product Management, and Walt Hinton, Head of Corporate and Product Marketing. Both Jeff and Walt have a long history in the storage industry, including companies like Virident, Western Digital, Fusion-io, Brocade, NetApp and Data Domain, to name but a few! It’s great to get opinions on the future of storage technology from industry experts, in this case on the subject of NVMe architectures.

NVMe represents a paradigm shift in how storage platforms are implemented. Where companies moved away from SAN to get local performance and agility, new architectures are enabling modern applications to take advantage of shared rack-scale storage and compute. Exactly how this plays out remains to be seen, but we can see a move back towards the SAN technology that was so successful in the early part of the new millennium.

More information on Pavilion Data can be found at paviliondata.com.

Elapsed Time: 00:27:37

Timeline
* 00:00:00 – Intros
* 00:02:00 – Are we seeing the industry cycle around again with SANs?
* 00:03:00 – Industrial Military Complex – Microsoft, Oracle!
* 00:06:30 – Why did the new apps not use Fibre Channel?
* 00:09:00 – Understanding storage disaggregation
* 00:10:30 – Could scale-out databases still benefit from shared storage?
* 00:13:00 – How can pricing be maintained without data reduction?
* 00:18:00 – Do super-high capacity drives force a return to shared storage?
* 00:21:00 – Should architectures start using cache-less writes?
* 00:24:00 – How are rack-scale storage solutions being adopted?
* 00:27:30 – Wrap Up

Related Podcasts & Blogs
* #84 – Discussing New NVMe Form Factors with Jonathan Hinkle
* #76 – Fibre Channel and NVMe with Mark Jones
* #61 – Introduction to NVM Express with Amber Huffman
* The Race towards End-to-End NVMe in the Data Centre

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 8D01.

 #88 – Nigel Tozer returns to talk about Ransomware | File Type: audio/mpeg | Duration: 32:02

This week Chris and Martin talk to Nigel Tozer, Solutions Marketing Director for EMEA at Commvault. Nigel was a guest about 12 months ago, when he talked about GDPR. This time the discussion is about ransomware and what businesses can do about it.

The challenges of protecting data from theft or extortion are greater than ever. So, can we identify a common attack model? Are specific operating systems more vulnerable? Most important, how do you develop a plan that protects your data and systems? The process is more than just patching primary systems (although we talk about that). It also means ensuring backup data is safe, because this is the main means of recovery.

Nigel references the National Cyber Security Centre, which can be found at https://www.ncsc.gov.uk/

Elapsed Time: 00:32:02

Timeline
* 00:00:00 – Intros
* 00:01:20 – GDPR – was it a storm in a teacup?
* 00:02:30 – What is ransomware?
* 00:04:00 – Is there a common attack surface?
* 00:06:20 – How does ransomware get installed?
* 00:07:12 – Ransomware as a service!
* 00:08:30 – Should you pay a ransom?
* 00:10:52 – What are the financial impacts for enterprises?
* 00:14:30 – Could backups also get compromised?
* 00:17:00 – How do you protect official exits?
* 00:20:00 – How do you know when you have active ransomware?
* 00:23:00 – Defence in depth – network protection is needed too
* 00:23:00 – How should patching be done?
* 00:27:00 – The US government shutdown exposed risk to ransomware
* 00:29:00 – So what strategy should be used to deal with ransomware?
* 00:31:00 – Wrap Up

Related Podcasts & Blogs
* Garbage Collection #008 – Chris’ Travels – Commvault, NetApp and SFD14
* #86 – Eran Brown Discusses Storage, Security & Multi-Cloud
* Introduction to GDPR with Nigel Tozer from Commvault

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 7CCD.

 #87 – The Risks of Storage Media Reuse versus Recycling | File Type: audio/mpeg | Duration: 27:28

In this episode, Chris and Martin talk to Simon Zola from Avtel Data Destruction. This is a follow-up to episode #85 (Storage for Home and Homelabs), where Martin and Chris questioned the ease of recovering data from pre-owned storage media. As we find out, the process was pretty easy: Martin was able to recover significant amounts of personal data with little effort, using standard tools.

Media destruction is one route to safely dispose of drives and guarantees that data is destroyed. But what are the consequences? Simply disabling a drive and throwing it into landfill isn’t good for anyone. An alternative is to recycle the media. Simon takes us through Avtel’s hardware milling process, capable of turning a hard drive to dust in 7 seconds! We discuss the environmental and safety aspects of such a process and learn that the downstream waste it produces has ongoing value. Nothing goes to landfill.

You can find more about Avtel on their website. There’s also a video on YouTube that provides an idea of how the process works.

Elapsed Time: 00:27:28

Timeline
* 00:00:00 – Intros
* 00:01:30 – What did Martin find with used SSDs?
* 00:04:53 – Why should we be destroying drives?
* 00:06:44 – Will encrypted drives have more value?
* 00:07:25 – 25 million mobile phones kept in cupboards
* 00:08:30 – How does Avtel destroy media?
* 00:10:00 – Japan Olympics medals will come from recycled mobiles
* 00:12:20 – A chain of custody is as important as destruction
* 00:19:00 – What approach should businesses take with media?
* 00:21:00 – Are desktop drives an exposure for businesses?
* 00:22:00 – Are there any options for home users?
* 00:25:00 – Should vendors be taking more responsibility (or are they)?
* 00:26:40 – Wrap Up

Related Podcasts & Blogs
* #85 – Storage for Home and Homelabs

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 94C8.
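For anyone curious how simple a check like Martin’s can be, the sketch below shows one basic way to look for residual data on a used drive or disk image. This is a hypothetical illustration, not a tool from the episode: it only detects non-zero blocks, which shows a drive wasn’t wiped but says nothing about SSD over-provisioned areas that standard reads can’t reach.

```python
# Hypothetical sketch: scan a drive (or disk image) for residual data.
# Usage: python scan_drive.py /dev/sdX  (block devices require root)
import sys

BLOCK_SIZE = 1024 * 1024  # read in 1 MiB chunks

def scan_for_data(path: str) -> None:
    total = nonzero = 0
    with open(path, "rb") as dev:
        while True:
            block = dev.read(BLOCK_SIZE)
            if not block:
                break
            total += 1
            if any(block):  # any non-zero byte suggests residual data
                nonzero += 1
    print(f"{nonzero} of {total} blocks contain non-zero data")

if __name__ == "__main__":
    scan_for_data(sys.argv[1])
```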

 #86 – Eran Brown Discusses Storage, Security & Multi-Cloud | File Type: audio/mpeg | Duration: 34:07

This week, Chris and Martin talk to Eran Brown, CTO in EMEA for INFINIDAT. As we move to a more distributed world, implementing data security becomes ever more challenging. Eran explains the issues and how his customers are looking to solve the problem.

One of the initial choices is deciding exactly where to encrypt. We’re familiar with drive-level encryption, even host-level. Eran sees hypervisor and application encryption as equally if not more important.

What are the implications for the technology? Encryption reduces the savings from de-duplication and compression, changing the cost model. Encryption and data security also need to be applied to secondary backup data, which remains a big exposure for many companies.

As the conversation moves towards public cloud and micro-services, it’s clear that data gravity poses a challenge for building out a secure data framework. It’s about choosing the right centre of gravity for data, not applications.

You can find more about INFINIDAT at their EMEA homepage. Eran discusses two technologies: Oracle TDE (Transparent Data Encryption) and vSphere encryption.

Elapsed Time: 00:34:07

Timeline
* 00:00:00 – Intro
* 00:02:00 – Is co-location and cloud being heavily adopted?
* 00:03:00 – How “high up” should data be secured?
* 00:04:00 – The issues revolve around skills, rather than technology
* 00:06:00 – What are the security skills we need?
* 00:08:00 – Where can encryption be done?
* 00:10:26 – How do we map risk to applications?
* 00:12:50 – How do we deal with legacy systems?
* 00:13:30 – How will key management work in distributed environments?
* 00:14:40 – What breaks when we start encrypting upstream?
* 00:17:30 – Is backup secure enough?
* 00:19:34 – How will we manage trust for short-lived micro-services?
* 00:24:00 – Data gravity will influence where data is managed from
* 00:28:20 – How should end users develop a data security strategy?
* 00:32:00 – Wrap Up

Related Podcasts & Blogs
* #82 – Storage Predictions for 2019
* Data Protection in a Multi-Cloud World
* Evolving Enterprise Storage Buying Models
* Enterprise-Class Public Cloud

Eran’s Bio
Eran Brown is CTO, EMEA, at INFINIDAT. With over 15 years in the storage industry, Brown has an extensive technical background in networking, security, virtualisation, Big Data tools and other critical infrastructure components. Having progressed from Pre-Sales Engineer at NetApp to Senior Product Manager at INFINIDAT, Brown has proven results in leading technical sales activities and delivering new products to market. He has also worked across multiple verticals throughout his career, including the financial sector, oil & gas, telecoms and software, assisting in the planning, design and deployment of scalable infrastructure to support business applications.

Copyright (c) 2016-2019 Storage Unpacked.
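The point about encryption defeating data reduction is easy to demonstrate. In the sketch below (my illustration, not anything from the episode), a compressor collapses repetitive plaintext but can do nothing with random bytes, which is effectively what well-encrypted ciphertext looks like to a downstream storage array:

```python
# Why encrypting upstream defeats array-side compression and deduplication:
# good ciphertext is statistically indistinguishable from random data.
import os
import zlib

plaintext = b"customer_record;" * 4096    # 64 KiB of highly repetitive data
ciphertext = os.urandom(len(plaintext))   # stand-in for encrypted data

print(f"plaintext:  {len(plaintext)} -> {len(zlib.compress(plaintext))} bytes")
print(f"ciphertext: {len(ciphertext)} -> {len(zlib.compress(ciphertext))} bytes")
# The plaintext shrinks to a few hundred bytes; the "ciphertext" doesn't
# compress at all, so the array's data reduction savings disappear.
```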

 #85 – Storage for Home and Homelabs | File Type: audio/mpeg | Duration: 28:15

This week Chris and Martin discuss the subject of storage for home and homelabs. With so much content being personally generated these days, is it better to store data on local hardware or use the public cloud? It turns out that many of the home storage devices we’re familiar with are pushing more towards SMB and enterprise-class features.

Alternatively, if you’re like Martin, you could build your own solution from drives, Linux and an ATX chassis. A quick check online shows both HDDs and SSDs for sale. Are users being careful about wiping their data before selling? We’ll revisit this subject in a few weeks and find out.

For homelabs, there are so many options. Home NAS platforms will export iSCSI and NFS. Alternatively, solutions can be built from open source software. There are even simulators for many of the older storage platforms. These provide the ability to learn about storage solutions or test things out in a non-production environment.

Here are links to the simulators and other software we discussed:
* Dell EMC Unity
* Dell EMC Isilon Simulator
* NetApp ONTAP Simulator (login required)
* Ceph
* Gluster
* Lustre
* Synology Cloud Sync

Elapsed Time: 00:28:15

Timeline
* 00:00:00 – Intros
* 00:00:20 – Blue Monday!
* 00:02:00 – Why use home storage with public cloud availability?
* 00:03:30 – People are starting to store terabytes of personal data
* 00:04:00 – Is storage too cheap to bother being organised?
* 00:04:30 – What are the home NAS options? Buy or build?
* 00:07:30 – Make sure you backup and backup again!
* 00:09:30 – Can home NAS devices tier to the public cloud?
* 00:11:30 – Turns out Synology is pretty fully featured!
* 00:14:00 – What about encryption and data destruction?
* 00:16:00 – Default encrypt when uploading to cloud?
* 00:17:00 – Checking out cheap SSDs on eBay
* 00:19:00 – What are the storage options for a home lab?
* 00:21:00 – What could you build with open source?
* 00:22:30 – There are also storage software simulators
* 00:27:00 – Wrap Up

Related Podcasts & Blogs
* Home Lab: Building VM Images

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode F8A8.

 #84 – Discussing New NVMe Form Factors with Jonathan Hinkle | File Type: audio/mpeg | Duration: 33:21

This week, Chris and Martin talk to Jonathan Hinkle, Principal Researcher – System Architecture and Master Inventor at Lenovo. As solid-state storage and NVMe become more prevalent, the legacy 2.5″ HDD format is increasingly impractical for getting the best density, power and cooling in servers. Jonathan explains how his work led to the founding of the Enterprise and Data Centre Small Form Factor (EDSFF) working group and new device formats being introduced into the industry.

EDSFF is responsible for managing device standards like the Intel Ruler (see a previous blog post covering this technology). How did the standards evolve, which vendors have contributed and how can we expect vendors to integrate these standards into their products?

More information on EDSFF can be found at https://edsffspec.org/

Elapsed Time: 00:33:02

Timeline
* 00:00:00 – Intros
* 00:02:00 – EDSFF – Enterprise and Data Centre Small Form Factor Group
* 00:02:30 – The legacy hard drive format
* 00:04:50 – Remember Fusion-io? The first AIC card
* 00:06:10 – The HDD form factor was limiting for solid-state devices
* 00:06:50 – The HDD “box” was bad for airflow
* 00:08:30 – Power can be done differently with SSDs
* 00:10:00 – New form factors enable much better server configurations
* 00:10:50 – Where did the new form factors and EDSFF come from?
* 00:13:00 – Lenovo, Intel and Samsung align, plus MSFT, HPE, Facebook & Dell
* 00:15:00 – Intel brought the Ruler format into the group
* 00:18:00 – How far can the new standard last? 5 years, 7 years?
* 00:20:50 – What determined the device dimensions?
* 00:22:50 – What does that mean for devices in a single server?
* 00:23:00 – Use cases: HCI, hyperscale, disaggregated solutions
* 00:27:00 – Disaggregated solutions like E8 Storage and Excelero could use this tech
* 00:28:30 – Could compute be added to the Ruler format?
* 00:30:00 – Are AIC and HDD solid-state formats effectively dead?
* 00:31:00 – Could new formats be used for old uses, like NICs?

Related Podcasts & Blogs
* #61 – Introduction to NVM Express with Amber Huffman
* Samsung Introduces 8TB NF1 NVMe SSDs
* Has NVMe Killed off NVDIMM?
* Flash Capacities and Failure Domains

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode B222.

 #83 – Introduction to NetApp MAX Data | File Type: audio/mpeg | Duration: 25:03

On this week’s podcast, recorded live at NetApp Insight 2018, Chris talks to Greg Knieriemen and Rob McDonald about the introduction of Memory Accelerated Data, commonly called MAX Data. The MAX Data solution is a software product that implements a local file system on a server using local persistent memory such as Intel Optane. Of course, a local file system is what DAS (Direct Attached Storage) offered 20 years ago, but MAX Data is much more than DAS. Protection against loss within a server is achieved with a feature called MAX Recovery that synchronously replicates data to another backup server. Data is also tiered to NetApp AFF storage as an additional level of protection.

The question is how the technology could be used. The most obvious benefit is to speed up existing applications; however, performance improvements could also reduce licensing costs (e.g. enterprise databases) and make more efficient use of server hardware (see episode #82 on storage predictions and SCM). Greg neatly sidesteps the question of using MAX Data with NetApp AI, but this seems like the most obvious use case.

For more details on MAX Data, check out the landing page that Rob recommends, along with a blog post he references that talks a bit further about database acceleration. We talked to Greg about his move to NetApp in episode #78.

Elapsed Time: 00:25:03

Timeline
* 00:00:00 – Intros
* 00:01:40 – What is MAX Data?
* 00:03:30 – What persistent memory does MAX Data use?
* 00:06:16 – Application transparency, appearing as a local file system
* 00:07:10 – How is MAX Data not DAS?
* 00:10:00 – MAX Data is a real tier of storage
* 00:13:40 – MAX Data integrates with existing shared AFF systems
* 00:15:30 – Where else could the software be used? Cloud? AI?
* 00:18:30 – Applications don’t need to be rewritten to use MAX Data
* 00:22:10 – Greg ducks the question of using MAX Data with NetApp AI
* 00:24:10 – Wrap Up

Related Podcasts & Blogs
* #82 – Storage Predictions for 2019
* #78 – Thoughts on NetApp with Greg Knieriemen
* #80 – Discussing NetApp’s AI Strategy with Santosh Rao

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode IDUF.

 #82 – Storage Predictions for 2019 | File Type: audio/mpeg | Duration: 35:56

The idea of this episode was to put some structure around a set of enterprise storage predictions for 2019. As you will hear from the dialogue, that’s not quite what we achieved! However, Chris Evans, Chris Mellor and Martin Glassborow do raise some interesting points on the direction of the industry in 2019.

The conversation starts with a look at media. QLC flash is likely to be a hot topic, but what about storage class memory? Have hard drives had their day, or is the technology moving into a state of equilibrium?

The conversation moves on to standards and protocols – NVMe in particular. There’s a real challenge around using Fibre Channel or Ethernet for block storage (Ethernet has already won in the file and object space). But how exactly will NVMe adoption proceed, especially in public cloud?

Probably the biggest talking point is data management. Technology improves consistently; however, the challenge of managing data still persists. What can we expect in 2019 – do we need an Open Data Management model?

The wrap-up looks at a few vendors: who will do well and who is in trouble. Possibly the biggest challenge is over-supply in the SAN and filer market, according to Chris M. We will see!

Elapsed Time: 00:35:56

Timeline
* 00:00:00 – Intros
* 00:00:38 – Who’s Gavin?
* 00:01:15 – Media – QLC gains more traction
* 00:03:00 – Will 2019 be the arrival of SCM?
* 00:04:50 – Has 3D XPoint development stagnated?
* 00:05:30 – SCM will drive container adoption
* 00:06:30 – Will HPE put SCM into Synergy?
* 00:08:10 – 2019 – the year of optical!
* 00:09:00 – NVMe (over Fabrics) will be big in 2019
* 00:10:00 – How about NVMe/TCP in public cloud?
* 00:13:00 – How does NVMe-oF compare to FCoE?
* 00:18:00 – The storage admin is not dead – yet!
* 00:18:30 – An Open Data Management model – like OSI?
* 00:22:00 – Do we need a dominant player in data management?
* 00:23:30 – S3 killed CDMI
* 00:25:00 – Can secondary data solution vendors solve data management?
* 00:27:00 – Which vendors will do well during 2019? Pure Storage?
* 00:30:00 – Could better focus be important for storage in 2019?
* 00:32:00 – Chris sees some saturation in SAN and filer markets
* 00:33:30 – Summing up – data management is the biggest challenge
* 00:35:10 – Wrap Up

Related Podcasts & Blogs
* #81 – Storage or Data Asset Management?
* #79 – Learning More About QLC Storage with Steve Hanna
* #76 – Fibre Channel and NVMe with Mark Jones
* The Biggest Storage Trends of 2019
* Processors back under the spotlight for 2019

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 39A2.

 #81 – Storage or Data Asset Management? | File Type: audio/mpeg | Duration: 23:16

This week Chris and Martin talk about the evolution from storage management to data management. This follows from recent vendor events where data management featured highly, but still seemed to focus on infrastructure products. Is there a definition that can bridge the gap – something like Data Asset Management?

The team start by trying to get a handle on what storage and data management actually mean. In the data protection world, for example, DLP – data leakage prevention, or data loss prevention – refers to more advanced versions of simple backup. Things get more complex when considering whether the thing under management is the data as a single entity or the content within it.

What data management work are secondary storage vendors doing? Can backup appliance vendors be classed as data managers? Why are storage vendors not making more of their own software and helping to drive product strategies?

In reality, some of this work is already in place. At the time of recording, episode #77 (HPE Performance Insights) had not been published. The discussion with Ivan Iannaccone covers how HPE in particular is using data from the field to drive better product solutions.

From Dell’s perspective, the acquisition of Data Frameworks represents a step towards data asset management. The Dell Analyst presentation in Chicago mentioned in the podcast is titled Unlock The Power of Data. Companies on stage at the Dell event were:
* GraphCore
* Immuta
* ZingBox
* JASK
* Noodle.ai

Elapsed Time: 00:23:16

Timeline
* 00:00:00 – Intros
* 00:01:30 – Do vendors offer real data management or something else?
* 00:02:49 – How can we define data and storage management?
* 00:04:10 – Data protection, more an advanced storage management task?
* 00:04:40 – DLP – Data Loss/Leakage Prevention
* 00:05:50 – A definition of data management
* 00:07:40 – Are we blurring boundaries between storage admins, DBAs and app owners?
* 00:08:30 – Dell invests in startups – how many are they actually using in their business?
* 00:10:00 – Yet storage vendors have lots of data that could be managed
* 00:12:00 – Do storage vendors drive their own product strategy from data?
* 00:13:00 – How will SDS vendors use storage analytics data?
* 00:15:00 – What about the secondary storage companies, like Cohesity?
* 00:16:30 – Do we need a new term like Data Asset Management?
* 00:18:40 – Vendor acquisitions: Boomi & Data Frameworks
* 00:21:00 – Wrap Up

Related Podcasts & Blogs
* #77 – HPE Performance Insights with Ivan Iannaccone
* #75 – It’s ILM All Over Again with Chris Mellor
* #72 – Hitachi Vantara Looking Forward with Shawn Rosemarin

Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode FD9C.

 #80 – Discussing NetApp’s AI Strategy with Santosh Rao | File Type: audio/mpeg | Duration: 24:01

In this podcast, recorded at NetApp Insight 2018, Chris talks to Santosh Rao, Senior Technical Director at NetApp. Santosh leads NetApp’s AI and Data Engineering efforts and has a 10-year history at the company, working on initiatives including Clustered ONTAP.

This conversation covers how NetApp is developing an Edge-to-Core-to-Cloud strategy that includes collecting and processing data at the edge, storing it in the core and exploiting the AI tools of the public cloud. End users want to make the most of CPU and GPU solutions while minimising data movement. Edge solutions, including ONTAP Select, provide the means to collect and store data with some pre-processing, if required. The features of ONTAP allow data to be efficiently moved to the core and cloud for further analysis.

More information on NetApp’s AI strategy can be found at https://www.netapp.com/ai.

Elapsed Time: 00:24:01

Timeline
* 00:00:00 – Intros
* 00:01:20 – What is NetApp doing with AI? Data pipelines
* 00:02:50 – What does Edge->Core->Cloud mean?
* 00:05:00 – Entire devices are being used for de-staging of data
* 00:06:10 – How much pre-processing is being done on the ONTAP Select device?
* 00:08:25 – How does data get into the cloud for processing?
* 00:09:40 – Customers are building multi-cloud architectures to use CSP AI technology
* 00:11:30 – Tri-cloud architectures are being affected by GDPR
* 00:12:25 – ONTAP AI – a software stack and hardware offering – including cloud
* 00:14:50 – What are the biggest challenges for a distributed AI architecture?
* 00:16:00 – ONTAP enables “data copy avoid”
* 00:17:00 – Data is now the centre of the IT universe
* 00:19:20 – NASA – 30 years of data, outliving the platform/compute
* 00:21:40 – Where are we headed in the next 3-5 years?
* 00:23:20 – Wrap Up

Related Blogs & Podcasts
* #78 – Thoughts on NetApp with Greg Knieriemen
* Soundbytes #014: A Conversation with NetApp Founder Dave Hitz
* Soundbytes: The Data Fabric Explained with NetApp CTO Mark Bregman

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode QYNZ.

 #79 – Learning More About QLC Storage with Steve Hanna | File Type: audio/mpeg | Duration: 30:06

This week Chris and Martin talk to Steve Hanna, Senior Product Manager for Enterprise SSDs at Micron. Steve looks after the 5210 ION QLC SSD product line, which was introduced earlier this year. As an evolution of existing flash technology, QLC offers greater capacities at lower cost. But what are the issues?

The conversation covers how QLC has reduced the price to 1/100th of that seen in early enterprise SSDs. Although endurance is lower than in previous flash generations, in reality the ability to write up to 1 DWPD is enough for many use cases. However, we will see QLC co-exist with TLC, rather than become a direct replacement. Understanding workloads will be critical, and the discussion covers this a few times, just for clarity.

Steve points us at a couple of resources. Check out https://micron.com/5210 for product specs on the 5210 ION drive and https://micron.com/QLC for all things QLC. Here you can find helpful information and guides to understanding the technology.

Elapsed Time: 00:30:06

Timeline
* 00:00:00 – Intros
* 00:01:26 – What is QLC flash?
* 00:02:20 – Diminishing returns from bit density
* 00:03:00 – Price reductions – $20 to $0.20/GB
* 00:05:00 – How does endurance change with QLC?
* 00:07:00 – QLC will be a read-intensive workload product
* 00:09:40 – But QLC will not be an entire replacement for TLC
* 00:10:30 – Will QLC kill the HDD? Is this a TCO play?
* 00:14:00 – Use cases, like CDNs
* 00:15:30 – How will workload mix differ with QLC?
* 00:17:20 – Tiering will be back in SSDs
* 00:18:30 – QLC will be the performance capacity tier
* 00:19:30 – Why SATA though? Why not NVMe for QLC?
* 00:22:30 – Let’s talk workloads again
* 00:25:00 – Why are we not starting at 32TB for QLC drives?
* 00:28:00 – Wrap up

Related Blogs & Podcasts
* #54 – Are we at All-Flash and HDD Array Price Parity?
* Garbage Collection – All-flash Market Consolidation
* #74 – All About Serial Attached SCSI with Rick Kutcipal
* #44 – Ultra High Capacity Flash Drives
* QLC NAND – how real is it and what can we expect from the technology?
* Micron users in the era of QLC SSDs

Steve’s Bio
Steve is an acclaimed author, speaker, innovator and leader. He has transformed products, led businesses and brought new categories to life – like the world’s first QLC SSD, the hardware heartbeat of artificial intelligence, machine learning, analytics, media streaming and more. Steve studied innovation and entrepreneurship at Stanford and has authored over 50 thought leadership pieces and delivered seminars on enterprise storage, artificial intelligence, machine learning and building a winning team culture.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 5CAB.
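As a quick back-of-the-envelope on the endurance discussion, a DWPD (drive writes per day) rating converts to total bytes written in a straightforward way. The sketch below uses my own illustrative figures, not numbers quoted by Steve:

```python
# Illustrative endurance arithmetic for a 1 DWPD rating (hypothetical figures).
# TBW (terabytes written) = capacity * DWPD * days of warranty
capacity_tb = 7.68       # example QLC drive capacity
dwpd = 1.0               # drive writes per day, the rating discussed in the episode
warranty_years = 5

tbw = capacity_tb * dwpd * 365 * warranty_years
print(f"Rated endurance: {tbw:,.0f} TB written over {warranty_years} years")

# Working backwards: an application writing 2 TB/day to this drive
# consumes only 2 / 7.68 ≈ 0.26 DWPD, well inside a 1 DWPD rating.
print(f"Implied DWPD at 2 TB/day: {2 / capacity_tb:.2f}")
```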

 #78 – Thoughts on NetApp with Greg Knieriemen | File Type: audio/mpeg | Duration: 17:07

Last week Chris attended NetApp Insight in Barcelona and had a chance to chat to Greg Knieriemen, Chief Technologist in the Storage Systems & Software business at NetApp. Greg was formerly at Hitachi Vantara, where he held a similar role. This conversation touches on Greg’s reasons for choosing NetApp and exactly what he will be focusing on within the company.

Elapsed Time: 00:17:07

Timeline
* 00:00:00 – Intros
* 00:01:30 – Why move to NetApp?
* 00:04:00 – Three BUs: Storage/Systems, Cloud Infrastructure, Cloud Services
* 00:07:00 – The Data Fabric is much more real today than two years ago
* 00:08:00 – Acquisitions: StackPoint, GreenQloud, SolidFire
* 00:10:00 – Cultural change – allowing new processes to drive existing solutions
* 00:11:00 – Where are the gaps? Bringing other traditional products into the strategy
* 00:13:00 – What will Greg be focusing on over the next 6 months? MAX Data
* 00:16:00 – Wrap Up

Related Podcasts & Blogs
* Soundbytes #013: Hitachi Vantara with Greg Knieriemen

Greg’s Bio
Greg Knieriemen is the NetApp Chief Technologist and helps develop and drive the vision and application of NetApp products and solutions. Previously, Greg worked for Hitachi and was the founder of the Speaking in Tech podcast. Greg has over 15 years of experience using, deploying and marketing enterprise IT solutions.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode CFWD.

 #77 – HPE Performance Insights with Ivan Iannaccone | File Type: audio/mpeg | Duration: 35:58

This week HPE announced Performance Insights, an extension to the InfoSight platform that provides storage customers with greater detail on their 3PAR array performance. Performance Insights takes the knowledge from years of HPE 3PAR field data and combines it with genuine machine learning and AI to give HPE 3PAR customers actionable insights into the performance characteristics of their storage arrays. In particular, this means being able both to detect performance problems and to do more accurate workload planning.

A few weeks ago, Martin and Chris talked to Ivan Iannaccone, VP and GM for the HPE 3PAR product line. Ivan takes us through the background of InfoSight, which came with the Nimble Storage acquisition. The discussion digs down into why performance management is becoming too hard for individuals and how AI can do a much better job.

For more information on the Performance Insights news, check out Calvin Zito’s Around the Storage Block blog and podcast.

Elapsed Time: 00:35:58

Timeline
* 00:00:00 – Intros
* 00:02:30 – InfoSight – Nimble AI analytics
* 00:05:00 – What does InfoSight actually do?
* 00:07:00 – InfoSight now being applied to 3PAR
* 00:10:30 – InfoSight was the value-add to a solid platform
* 00:14:00 – How do we tackle issues with performance?
* 00:19:00 – Using performance metrics to maximise investment
* 00:22:00 – Saving customers real money by identifying problem issues
* 00:24:00 – How will customers see Performance Insights on 3PAR?
* 00:26:50 – What makes Performance Insights really ML/AI based?
* 00:34:00 – Wrap Up

Related Blogs & Podcasts
* HPE Brings Nimble Skynet to 3PAR Arrays

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 89PY.
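At its simplest, the “actionable insights” pattern comes down to baselining a metric and flagging deviations from it. The sketch below is a deliberately minimal illustration of that idea using a rolling mean and standard deviation; it is my own example, and the models in products like InfoSight are far more sophisticated:

```python
# Minimal sketch of latency anomaly detection: flag samples that sit more
# than `threshold` standard deviations above a rolling baseline.
# Illustrative only; not how InfoSight is actually implemented.
import statistics

def flag_anomalies(latencies_ms, window=20, threshold=3.0):
    anomalies = []
    for i in range(window, len(latencies_ms)):
        history = latencies_ms[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard against zero spread
        if (latencies_ms[i] - mean) / stdev > threshold:
            anomalies.append((i, latencies_ms[i]))
    return anomalies

# A steady ~2 ms array workload with a single latency spike at sample 25
samples = [2.0 + 0.05 * (i % 3) for i in range(40)]
samples[25] = 9.0
print(flag_anomalies(samples))  # -> [(25, 9.0)]
```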

 #76 – Fibre Channel and NVMe with Mark Jones | File Type: audio/mpeg | Duration: 28:58

In this week’s podcast, Chris and Martin talk to Mark Jones from the Fibre Channel Industry Association. This recording is an introduction to running NVMe over Fibre Channel, setting the scene on how Fibre Channel has evolved and will continue to be a storage protocol for many years.

Fibre Channel reached an important milestone in 2018: it has been 30 years since development began and 25 years since the first products shipped. Speeds started at 133Mb/s, while today Gen 6 FC runs at 32Gb/s. Higher speeds are coming in 2019 and can also be achieved with channel bonding.

How does NVMe fit in with Fibre Channel? FC can be used as the transport layer for many storage protocols, including FCP (SCSI), FICON and now NVMe. We can expect to see significant improvements in performance and reductions in latency compared to FCP. Even better, with the right level of equipment, FCP and NVMe run side by side.

For more information on FC-NVMe, check out the FCIA website at fibrechannel.org.

Elapsed Time: 00:28:58

Timeline
* 00:00:00 – Intros
* 00:00:00 – What is the FCIA?
* 00:02:00 – What is Fibre Channel?
* 00:04:30 – Yes, Fibre is the correct spelling
* 00:05:00 – The FC-4 layer – where the storage protocols run
* 00:06:30 – Which is fastest, Ethernet or Fibre Channel?
* 00:09:00 – Let’s talk FC-NVMe!
* 00:12:50 – How much of a performance gain is there with FC-NVMe?
* 00:14:30 – Fibre Channel supports multiple storage protocols – at the same time
* 00:16:40 – Is Fibre Channel adoption growing or decreasing?
* 00:18:50 – What’s needed to install and use NVMe?
* 00:23:40 – FC or Ethernet – place your bets
* 00:27:30 – Wrap up

Related Podcasts & Blogs
* #74 – All About Serial Attached SCSI with Rick Kutcipal
* #61 – Introduction to NVM Express with Amber Huffman
* #59 – Ethernet vs Fibre Channel

Mark’s Bio
Mark Jones has worked in the enterprise computing industry since 1984 and is currently Director of Technical Marketing for the Emulex Connectivity Division of Broadcom Inc. Mr. Jones came to Broadcom via the acquisition of Emulex Corporation in 2015, where he had worked since 2002. For the 18 years prior, he worked as a Strategic Solutions Manager for Burroughs/Unisys. Mr. Jones has a Computer Science degree from the University of Redlands and currently serves as President of the Fibre Channel Industry Association.

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode 22YV.

 #75 – It’s ILM All Over Again with Chris Mellor | File Type: audio/mpeg | Duration: 37:26

Data volumes have always increased over time, as we store more information in the hope that one day some of it will be useful. Even 30 years ago on the mainframe, Information Lifecycle Management (ILM) was a thing, with tools like DFHSM used to move content between disk and tape. This week Martin, Chris Evans and Chris Mellor talk about the new range of data management or ILM products that are looking to resolve the current issues of data sprawl.

How do these products work? Is it all about data ingest, or just indexing? What happens when data moves between platforms, and how do files continue to be accessed? Companies like Komprise claim to be able to move data without impacting the application or needing to use techniques like stubs. Others, like IBM Spectrum Discover, appear simply to do content indexing. Ultimately, perhaps we need to move everything to object stores and dispense with file services altogether. Can we ignore public cloud, and can we get the S3 API moved to an open standard? All questions the team attempt to answer on this week’s podcast.

Elapsed Time: 00:37:26

Timeline
* 00:00:00 – Intros
* 00:01:00 – Why do we need data management or ILM?
* 00:03:00 – How do we manage so many different data silos?
* 00:04:40 – Data management gets conflated with other features
* 00:05:00 – What vendors are there in this space?
* 00:07:00 – ILM on the mainframe!
* 00:09:00 – Should ILM functions be built into the file system?
* 00:11:00 – How is public cloud influencing the ILM process?
* 00:12:30 – Data gravity (or inertia) causes problems moving into the cloud
* 00:16:00 – How can we solve the distributed data problem?
* 00:18:00 – Hammerspace – separating data and metadata
* 00:21:00 – Everyone needs ML & AI in their storage platforms
* 00:22:00 – Have we simply not defined the data management problem?
* 00:25:00 – Remember File Area Networks?
* 00:27:00 – How does IBM Spectrum Discover work?
* 00:31:00 – Qumulo QF2 – focused on fast metadata searches
* 00:32:30 – We still have a lot of sticking plaster and bandaids in place
* 00:33:40 – Should we just move everything to object stores?
* 00:35:30 – Please Jeff, can you donate the S3 API to the community?
* 00:36:30 – Wrap Up

Related Podcasts & Blogs
* #68 – Intelligent Object Storage with Scott Baker
* #65 – Challenges in Managing Unstructured Data with Shirish Phatak
* #60 – New Data Economy with Derek Dicker
* Data Gravity Pointed the Way to Data Rather than Storage Management

Copyright (c) 2016-2018 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode HNWX.
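The DFHSM-style migration mentioned above is, at heart, a policy that moves data between tiers based on age or access patterns. The toy sketch below shows that idea at filesystem level; it is purely my own illustration, and products like Komprise do this transparently, without breaking application access:

```python
# Toy age-based tiering policy in the spirit of HSM/ILM tools like DFHSM.
# Illustrative only: real data management products preserve the original
# access path so applications never notice the move.
import shutil
import time
from pathlib import Path

def migrate_cold_files(primary: Path, archive: Path, max_age_days: int = 90) -> None:
    """Move files not accessed within max_age_days from primary to archive."""
    cutoff = time.time() - max_age_days * 86400
    for path in primary.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            target = archive / path.relative_to(primary)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))
            print(f"migrated {path} -> {target}")

# Example (hypothetical paths):
# migrate_cold_files(Path("/data/primary"), Path("/data/archive"))
```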
