Storage Unpacked Podcast

Summary: Storage Unpacked is a technology podcast that focuses on the enterprise storage market. Chris Evans, Martin Glassborow and guests discuss technology issues with vendors and industry experts.

  • Artist: Storage Unpacked Podcast
  • Copyright: Copyright © 2016-2021 Brookend Ltd

Podcasts:

 #74 – All About Serial Attached SCSI with Rick Kutcipal | File Type: audio/mpeg | Duration: 22:21

Most people probably don’t think about the protocol that connects their storage device to a PC or server.  However, NVMe has changed that and become one of the storage buzzwords of 2018.  But what about existing devices not using NVMe?  SAS, or Serial Attached SCSI, has been a mainstay of storage connectivity for 20 years.  With 12Gb/s throughput, SAS provides back-end connectivity for almost all the enterprise storage arrays on the market today.  The SAS protocol continues to be developed, with 24Gb/s speeds due in 2019.  However, SAS is not all about speed, as we find out when Chris and Martin talk to Rick Kutcipal, President of the SCSI Trade Association.  In fact, the SAS protocol has evolved with features to improve reliability and scalability.  New features are addressing storage technologies like SMR and HDD multi-actuator.  For a technology we take for granted, there’s a lot more going on than we realise.  SAS still has a future in the data centre, as it offers performance, reliability and, perhaps most importantly, scalability for vendors to build out large-scale storage solutions.

You can find out more about SAS at the SCSI Trade Association website, or follow them on Twitter at @sas_storage.

Elapsed Time: 00:22:21

Timeline

* 00:00:00 – Intro
* 00:01:30 – What is the SCSI Trade Association?
* 00:02:40 – What is a plugfest?
* 00:03:40 – What is SAS (Serial Attached SCSI)?
* 00:05:50 – SAS-4 is due – 24Gb/s throughput
* 00:07:30 – Where does SAS fit in, with the emergence of NVMe?
* 00:09:50 – Is SAS still good enough for most storage requirements?
* 00:11:00 – When will 24Gb SAS be available and what will it give?
* 00:12:15 – New encoding scheme – 8b/10b to 128b/150b
* 00:13:30 – New storage intelligence features
* 00:15:30 – Do end users and buyers care about this level of detail?
* 00:18:00 – Protocol changes will come for SMR and multi-actuator support
* 00:19:30 – So how should we position SAS, SATA and NVMe?
* 00:21:00 – Call to action – where to find out more?
* 00:21:50 – Wrap Up

Related Podcasts & Blogs

* #61 – Introduction to NVM Express with Amber Huffman
* #59 – Ethernet vs Fibre Channel
* One (Storage) Protocol to Rule Them All?
* NVMe over Fabrics – Caveat Emptor?

Rick’s Bio

Rick Kutcipal – President, SCSI Trade Association; Marketing Manager, Data Center Storage Group, Broadcom Inc.  Currently serving as a Marketing Manager in the Data Center Storage Group at Broadcom, Rick is a 25-year computer and data storage business veteran.  He coordinates the majority of standards activities for Broadcom worldwide and serves as President of the SCSI Trade Association.  Rick received his bachelor’s and master’s degrees in electrical engineering from the University of Utah.

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode IJ10.
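The encoding change flagged in the timeline (8b/10b to 128b/150b) is worth a quick illustration: the fraction of line bits carrying payload rises from 80% to roughly 85%, so SAS-4 gains usable bandwidth from the new line code as well as from the faster signalling rate. A minimal sketch of that arithmetic (the function name here is ours, purely illustrative):

```python
# Line-code efficiency: payload bits carried per bit transmitted on the wire.
def code_efficiency(payload_bits: int, coded_bits: int) -> float:
    return payload_bits / coded_bits

# 8b/10b, used by SAS generations up to 12Gb/s.
legacy = code_efficiency(8, 10)       # 0.80 -> 20% encoding overhead
# 128b/150b, introduced with 24Gb/s SAS-4.
sas4 = code_efficiency(128, 150)      # ~0.853 -> ~14.7% encoding overhead

print(f"8b/10b:    {legacy:.1%} efficient")
print(f"128b/150b: {sas4:.1%} efficient")
```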

 #73 – HYCU – Data Protection for Hyper-converged Infrastructure (Sponsored) | File Type: audio/mpeg | Duration: 34:25

This week, Chris and Martin talk to Subbiah Sundaram, VP of Products at HYCU, Inc.  HYCU is both the name of the company and the data protection product sold by HYCU, Inc.  The market already has many data protection solutions, so the unique differentiator for HYCU is its focus on the HCI market, specifically offering backup for Nutanix (both ESXi and AHV).  Of course, HCI platforms already have backup support but, as the discussion explains, deeper integration into the HCI APIs, specifically those around changed file tracking, gives HYCU an advantage over products backing up at the hypervisor layer.

As Chris and Martin find out, the name HYCU was originally intended to mean Hyper-Converged Uptime, but the association with haiku (the Japanese short poems) is much more interesting.  In fact, Martin created a HYCU haiku during the recording:

Martin’s Haiku:
My data flowing,
disaster strikes like lightning,
polarity reverses

You can find out more details on HYCU at https://www.hycu.com/ and, as Subbiah explains, go to https://www.hycu.com/tryhycu/ for a free 45-day download.

Elapsed Time: 00:34:35

Timeline

* 00:00:00 – Intros
* 00:01:34 – Why does HCI backup justify a new way of working?
* 00:03:30 – HCI introduces a new platform, not just a server to back up
* 00:04:30 – Virtualisation changed the individual server backup paradigm
* 00:05:30 – APIs for taking backups – HCI provides another access point
* 00:07:00 – Where does the name HYCU and the company come from?
* 00:11:30 – Martin creates a haiku
* 00:13:00 – How is the HYCU software deployed?
* 00:15:00 – How do service providers use HYCU with Nutanix?
* 00:19:40 – What differentiates HYCU from other backup vendors?
* 00:21:30 – Taking data from VADP or from the HCI storage layer?
* 00:23:55 – Applying backup to Nutanix Files (formerly AFS)
* 00:25:30 – Nutanix uses the Changed File Tracking (CFT) API
* 00:27:20 – Cross-cluster backup & restores and cross-hypervisor
* 00:31:50 – How does HYCU licensing work?
* 00:33:00 – Call to action – what’s next?
* 00:33:50 – Wrap up

Related Podcasts & Blogs

* Validating HYCU Deployment By The Numbers
* Comtrade Software Becomes HYCU and releases HYCU 3.0
* Automating HCI Data Protection
* Why HCI Data Protection is Different
* Briefing Document: HYCU

Subbiah’s Bio

Subbiah Sundaram is Vice President of Products at HYCU.  He joined HYCU in January 2017 and has been instrumental in enabling the company to deliver best-in-class multi-cloud solutions for Nutanix and Google Cloud.  Prior to joining HYCU, Subbiah held senior executive positions at BMC, CA, DataGravity, EMC, NetApp and Veritas, and has extensive experience in product development, planning and strategy.  He holds an MS in Computer Engineering from the University of Iowa and an MBA from the Kellogg School of Management at Northwestern University.

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode KUVM.

 #72 – Hitachi Vantara Looking Forward with Shawn Rosemarin | File Type: audio/mpeg | Duration: 21:28

This is the fourth and final podcast of four recorded at Hitachi NEXT 2018 in San Diego.  Previous episodes are listed below.  This conversation between Chris and Shawn Rosemarin, SVP and CTO of Global Field and Industry at Hitachi Vantara, discusses how customers’ approach to managing their data has changed and how Hitachi will continue to evolve the Vantara brand during the next 12 months and beyond.  The discussion is interesting as it covers areas like acquisitions and the number of services Hitachi Vantara now offers.  Ultimately, as Shawn says, the goal is to help customers manage and grow their businesses.

Shawn’s personal blog can be found at https://shawnrosemarin.com/.

Elapsed Time: 00:21:28

Timeline

* 00:00:00 – Intros
* 00:01:30 – How are customers acting differently to manage their data?
* 00:02:30 – Bridging the gap between hardware solutions and data management
* 00:04:00 – HCP, previously HCAP, from the Archivas acquisition
* 00:05:45 – Finding ways to derive more value from content, like video
* 00:08:30 – Data lakes became data swamps
* 00:09:45 – Acquisition strategy – REAN Cloud announced in July 2018
* 00:12:30 – Avoiding data silos
* 00:13:00 – What does multi-cloud mean if the clouds aren’t joined up?
* 00:17:00 – What’s ahead for the next 12 months with Hitachi?
* 00:18:30 – It’s Pen-tah-ho!
* 00:19:00 – More services likely to appear from Hitachi Vantara
* 00:21:00 – Wrap Up

Related Podcasts & Blogs

* #70 – Pentaho Data Aggregation with Arik Pelkey
* #68 – Intelligent Object Storage with Scott Baker
* #67 – Hitachi Vantara One Year On with John Magee

Shawn’s Bio

As part of the Executive Sales Leadership Team, Shawn holds direct accountability for all strategy and key talent across both Pre-Sales and Hitachi Vantara’s Industry Solutions Group.  Prior to this role, Shawn was a Chief Technologist within the Americas Field CTO team at VMware and held various roles in Technology and Sales leadership.  Shawn’s passion is working closely with customers to help them transform digitally with everything from private/hybrid cloud to end-user computing transformation and emerging IoT solutions.

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode 0X78.

 #71 – IP EXPO 2018 – Scale Computing, StorMagic, Nasuni and Nephos | File Type: audio/mpeg | Duration: 44:45

This is the second of two podcasts recorded live at IP EXPO 2018 in London.  You can find the first here.  In this episode, Chris and Martin speak to Scale Computing, StorMagic, Nasuni and Nephos Technologies.  The guests are:

* Aad Dekkers, Marketing Director, EMEA at Scale Computing
* John Glendenning, SVP Sales and Business Development at StorMagic
* Andy Hardy, Vice President, EMEA at Nasuni
* Michael Queenan, Co-founder at Nephos Technologies

Both Scale Computing and StorMagic offer products that work in the hyper-converged area and fit well with edge computing requirements.  Nasuni is a global scale-out file system backed by the public cloud.  Nephos Technologies is an independent consultancy helping companies design and implement data frameworks.  As usual, the conversations were fast-paced, although limited to 10 minutes each.  Thanks to all who took the time to speak with us over the course of the day, and to TouchdownPR for organising.

Elapsed Time: 00:44:45

Timeline

* 00:00:00 – Intros
* 00:00:30 – Aad Dekkers from Scale Computing
* 00:01:30 – Virtualisation based on KVM and SCRIBE
* 00:03:15 – Hyper-converged since 2012
* 00:05:30 – Solutions today fit well with edge computing requirements
* 00:07:45 – Disclaimer – Chris has a Scale cluster in the lab!
* 00:09:30 – Single nodes are also supported for specific edge cases
* 00:11:30 – John Glendenning from StorMagic
* 00:12:30 – SvSAN highly applicable for edge-type solutions
* 00:15:00 – How do you deploy and service remote locations like wind farms?
* 00:16:00 – Higher-level functions (e.g. replication) are done with partners
* 00:17:45 – Is there a trial version available?
* 00:18:30 – What about not being tied to the hypervisor?
* 00:20:00 – The ice cream has arrived!
* 00:23:00 – Andy Hardy from Nasuni
* 00:25:00 – Global scale-out NAS backed by the public cloud
* 00:26:45 – Current focus is archive
* 00:28:00 – Offering differentiated pricing for archive data without moving the content
* 00:33:15 – Is that my wife?  No, it’s the timer!
* 00:34:00 – Michael Queenan from Nephos Technologies
* 00:34:45 – Is this hardware replacement or true data strategy?
* 00:37:00 – How does Nephos stay independent?
* 00:38:15 – Michael’s family are forced to listen to the podcast!
* 00:40:00 – Nephos – ancient Greek for cloud
* 00:41:00 – What are the biggest problems to solve today?

Related Podcasts & Blogs

* #69 – IP EXPO 2018 – Cloudian, E8 Storage, StorCentric and Zerto
* Cloud as a Tier of Storage
* Scale Computing Debuts HC3 in Google Cloud Platform

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode B62W.

 #70 – Pentaho Data Aggregation with Arik Pelkey | File Type: audio/mpeg | Duration: 17:28

In this third podcast recorded live at Hitachi NEXT 2018, Chris talks to Arik Pelkey, Senior Director, Analytics Portfolio Product Marketing at Hitachi Vantara.  Arik is responsible for the Pentaho platform, an acquisition made originally by Hitachi Data Systems in 2015.  Pentaho is a data integration, visualisation and analytics platform used by data engineers to create data pipelines for AI models used by data scientists.

The conversation between Chris and Arik focuses on how Hitachi Vantara engages with customers to build out complex data analytics solutions.  Arik wraps up the conversation with a summary of the new Pentaho features announced at the show, including support for TensorFlow, Jupyter Notebook and Python.

Elapsed Time: 00:17:28

Timeline

* 00:00:00 – Intros
* 00:00:40 – What is Pentaho?
* 00:02:30 – Analysing 80 new data types a month
* 00:03:00 – How does customer engagement work with Big Data?
* 00:05:00 – How much consulting is involved in selling solutions?
* 00:07:00 – Who builds out and owns the data architecture?
* 00:12:00 – How many new job titles and roles have developed?
* 00:14:00 – What’s new in Pentaho?
* 00:17:00 – Wrap up

Related Podcasts & Blogs

* #68 – Intelligent Object Storage with Scott Baker
* #67 – Hitachi Vantara One Year On with John Magee
* #60 – New Data Economy with Derek Dicker

Arik’s Bio

Arik runs product marketing for Hitachi Vantara’s analytics portfolio, where he helps clients solve big data integration, analytics and IoT problems.  Arik is a frequent speaker on becoming a digital enterprise, IoT and predictive analytics, and has a strong passion for helping customers modernize information architectures to turn new and emerging data sources into insights and business outcomes.

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode 0J6F.

 #69 – IP EXPO 2018 – Cloudian, E8 Storage, StorCentric and Zerto | File Type: audio/mpeg | Duration: 44:15

This is one of two podcasts recorded live at IP EXPO 2018 in London.  The format is slightly different from what we’ve done in the past, in that we have four short 10-minute interviews with storage vendors.  Each talks about their company, products and the industry problem being solved.  The four guests are:

* Steve Blow, Technology Evangelist with Zerto
* Neil Stobart, VP Global Systems Engineering at Cloudian
* Julie Herd, Director of Technical Marketing at E8 Storage
* Read Fenner, VP Global Sales for Nexsan and Drobo & Mark Walker, Channel Sales Director (Nexsan) from StorCentric

These interviews are recorded live, so there’s no editing and, obviously with such a busy show floor, there is background noise.  Let us know if this format of production is something we should do more of, and if there are any specific companies to interview.

Elapsed Time: 00:44:15

Timeline

* 00:00:00 – Intros
* 00:00:30 – Zerto with Steve Blow
* 00:01:00 – What is an evangelist?
* 00:02:00 – Repositioning as an IT Resilience platform
* 00:02:35 – What is the market focus for Zerto?
* 00:03:40 – What are the new problems being solved?
* 00:04:50 – Planning on making backup sexy again
* 00:06:00 – What is Zerto’s cloud story?
* 00:07:30 – Didn’t we have CDP before?
* 00:08:30 – Why attend IP Expo?
* 00:10:00 – Can people trial Zerto?
* 00:11:00 – Cloudian with Neil Stobart
* 00:12:00 – Why use an on-premises object storage platform?
* 00:13:00 – What does an object store offer over a file system?
* 00:16:20 – Using Cassandra for platform scalability
* 00:18:00 – What separates Cloudian from other vendors?
* 00:22:30 – E8 Storage with Julie Herd
* 00:22:50 – What problems does E8 Storage address?
* 00:25:00 – What makes E8 different?
* 00:26:00 – Why can’t traditional storage deliver?
* 00:27:30 – Why do customers need to move to NVMe?
* 00:29:30 – What next? New form factors and SmartNICs
* 00:31:00 – What about beer?
* 00:32:55 – StorCentric with Read and Mark
* 00:33:00 – Nexsan and Drobo together
* 00:34:30 – Where did Nexsan fit in the enterprise?
* 00:37:00 – Where are the Nexsan and Drobo synergies?
* 00:39:00 – Core and Edge strategy
* 00:41:00 – How does StorCentric evolve past the two brands?

Related Podcasts & Blogs

* Disaggregated Storage Part II with Zivan Ori from E8 Storage
* Cloudian Acquires Infinity Storage
* Storage Field Day 7 – Initial Thoughts
* Hybrid Cloud and Data Mobility

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode 7SMM.

 #68 – Intelligent Object Storage with Scott Baker | File Type: audio/mpeg | Duration: 14:25

This is the second in a series of podcasts recorded at Hitachi NEXT 2018.  In this episode, Chris talks to Scott Baker, Senior Director of Product Marketing for Content and Data Intelligence at Hitachi Vantara.  The topic of conversation is how intelligent processing can be applied to content stored in Hitachi’s object storage platform, HCP.  HCI (Hitachi Content Intelligence) allows data to be pre-processed in memory before storing on the HCP archive.

Why is pre-processing so important?  Customers, including Rabobank (mentioned in the podcast), are using the technology to ensure compliance with GDPR and other regulations.  Scott mentions POPI (the Protection of Personal Information Act) in South Africa and the California Consumer Privacy Act as other examples of regulations imposing increasingly strict rules on the processing of personal information.  The conversation wraps up with a discussion of how Hitachi object storage can support public cloud, as customers move towards adopting hybrid and multi-cloud strategies.

This podcast was recorded outside due to the level of background noise in the conference halls.  As a result, there is a little wind noise on the recording, although thankfully, no helicopters!  For more information on Hitachi Content Intelligence, visit the Hitachi Community website for HCI.

Elapsed Time: 00:14:25

Timeline

* 00:00:00 – Intro
* 00:01:00 – Applying intelligence to object storage
* 00:02:00 – Hitachi Content Intelligence
* 00:04:00 – Exactly how is enrichment of data done?
* 00:06:00 – How widespread is object data ETL?
* 00:07:30 – How is Hitachi data management affected by GDPR?
* 00:10:30 – What is Hitachi’s object storage public cloud strategy?
* 00:12:00 – Data mobility – the big issue for cloud adoption
* 00:13:00 – Wrap Up

Related Podcasts & Blogs

* Soundbytes: Talking Object Storage with Jeff Lundberg
* #65 – Challenges in Managing Unstructured Data with Shirish Phatak
* Object Storage Capabilities Series
* Delivering File Protocols on Object Stores
* Nine Critical Features for Object Stores

Scott’s Bio

If it involves a 1 and a 0, public speaking, coaching/mentoring, scuba diving, pre-’82 American muscle cars, or 80s hair bands, then it feeds my passions and interests.  I’ve worked in IT for longer than I want to admit and have held Software Engineering Director, Enterprise-level Software Applications Developer, Database Designer and Administrator, and Project Manager positions in a wide variety of highly technical markets.  Not all of my professional life has been behind a compiler/designer – I’ve had the pleasure of holding field-facing roles in technical marketing and specialized field engineering.

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode 4O02.

 #67 – Hitachi Vantara One Year On with John Magee | File Type: audio/mpeg | Duration: 13:19

This week, Chris is in San Diego attending Hitachi NEXT 2018, the annual user conference for Hitachi Vantara customers.  Hitachi Vantara was formed in 2017 from the merger of Hitachi Data Systems, Hitachi Insight Group and Pentaho.  Last year we spoke to Greg Knieriemen about the new company and the reasons behind the decision to move on from the HDS brand.  One year on from the launch of Vantara, Chris talks with John Magee, VP, Product & Solutions Marketing at Hitachi Vantara, about how the transition is going.

This podcast is one of a number recorded at the event and was recorded outside due to the level of background noise in the conference halls.  As a result, there is a little wind noise on the recording, although thankfully, no helicopters (the event hotel is close to the San Diego naval base).

Elapsed Time: 00:13:19

Timeline

* 00:00:00 – Intro
* 00:01:00 – One year on, what’s the status?
* 00:04:00 – What happened to the IoT story?
* 00:05:00 – How has customer engagement changed?
* 00:07:00 – How have managed services evolved?
* 00:09:30 – What is this year’s event theme?
* 00:11:00 – The hype or reality behind IoT
* 00:13:00 – Wrap Up

Related Podcasts & Blogs

* Soundbytes #013: Hitachi Vantara with Greg Knieriemen
* #57 – Storage on the Edge with Scott Shadley

John’s Bio

John leads Product and Solution Marketing for Hitachi Vantara’s portfolio of storage, data center and cloud infrastructure, IoT, and big data analytics.

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode RMMR.

 #66 – Amazon RDS – Coming to a vSphere Cluster Near You | File Type: audio/mpeg | Duration: 30:37

Amazon Web Services (AWS) and VMware recently announced that RDS, the Relational Database Service, was in tech preview as an on-premises deployment on vSphere.  Running a service outside of Amazon’s data centres represents a big change for the company.  The only non-core offering to date has been Snowball, the edge data appliance.  In this week’s recording, Martin and Chris discuss the implications of running a service on-premises – what’s in it for the customer, for VMware and for AWS.

The database market is a lucrative business.  It’s reasonable to expect that end users will look to optimise their costs and to avoid those nasty annual audits of usage.  If businesses can get their heads around paying for a service on infrastructure they have already bought as a capital investment, then perhaps there’s an opportunity for significant savings.

But how exactly will it work?  Will AWS have access to your vSphere clusters?  Will they remotely manage maintenance, upgrades and recovery?

As well as the podcast, you can read more thinking in this blog post, also published today: AWS and VMware Partner to Bring RDS for the Enterprise

Elapsed Time: 00:30:37

Timeline

* 00:00:00 – Intros
* 00:00:30 – VMworld 2018 and ruining bank holidays
* 00:01:00 – AWS RDS – now available on-premises
* 00:02:24 – What is the Relational Database Service?
* 00:03:35 – Why (would DevOps) use RDS?
* 00:04:30 – The licensing nightmare of databases
* 00:05:00 – What’s bad about RDS – anything?
* 00:06:00 – What SLAs do AWS offer for RDS?
* 00:07:30 – The Instapaper outage
* 00:09:00 – Who doesn’t like a good argument on storage with a DBA?
* 00:09:35 – What is RDS on-premises, exactly?
* 00:11:30 – How is Amazon exposed by allowing on-premises deployments?
* 00:13:30 – How will pricing and billing work?
* 00:15:00 – How do the commercial database vendors feel about this offering?
* 00:19:20 – Who benefits from RDS on-premises?
* 00:25:00 – What does Amazon/AWS gain here?
* 00:26:30 – Is RDS on VMware just a big sales tool?
* 00:29:14 – Wrap Up

Related Podcasts & Blogs

* AWS and VMware Partner to Bring RDS for the Enterprise
* VMworld 2018 – Divergent VMware
* VMware Cloud on AWS – What We Know So Far
* Cloud Data Migration – Database Import Tools
* Learning from The Instapaper Outage

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission. Podcast Episode TJZL.

 #65 – Challenges in Managing Unstructured Data with Shirish Phatak | File Type: audio/mpeg | Duration: 31:51

In this week’s podcast we focus on the issues of managing unstructured data in a distributed world.  Chris and Martin are joined by Shirish Phatak, CEO at Talon Storage.

It’s interesting that “unstructured” proves to have a moveable definition, depending on what you want to include.  While we traditionally think of files and objects as unstructured, these so-called binary pieces of content typically do have structure within them.  In contrast, databases can be made up of unstructured data – e.g. files – that together take a structured form.

Getting past the definition, we find that data growth is certainly dependent on the industry, with a minimum of 20% annually, rising to as much as 100%.  As Martin points out, in his company the assumption is that 80% of storage will be full within 6 months of deployment.

With distributed data, we see processing at the edge and data management at the core.  In practical terms though, this can also mean moving data into the core for more analytics or batch processing.  The conversation highlights how data consistency or concurrency is so important in a distributed environment.  It’s easy for users to simply copy and rename a file, throwing data management processes into confusion.

Finally, the conversation moves to the public cloud, which at present seems to be acting simply as a large, easy-to-use repository.

You can find Talon Storage here – https://www.talonstorage.com/ and Shirish on LinkedIn here.

Elapsed Time: 00:31:51

Timeline

* 00:00:00 – Intros
* 00:01:00 – What is unstructured data?
* 00:04:30 – Why is unstructured the source of new data growth?
* 00:06:30 – Automated/background tasks creating data
* 00:07:00 – To centralise or not centralise?  What data is actually useful?
* 00:09:00 – How can you define security rules outside the data centre?
* 00:12:00 – Increased volumes of data result in policies, not active management
* 00:13:30 – Consistency and concurrency – enemies of distributed data
* 00:16:00 – One true copy – but at the risk of performance?
* 00:17:30 – You can’t fix stupid users!
* 00:19:00 – Are filesystems at fault?  Do we need ILM (again)?
* 00:22:30 – How is public cloud helping manage data?
* 00:28:30 – Are there any standards or best practices we can follow?
* 00:30:30 – Wrap up

Related Podcasts & Blogs

* #62 – The Future of Data Infrastructure with Scott Hamilton
* #60 – New Data Economy with Derek Dicker
* #57 – Storage on the Edge with Scott Shadley
* DataGravity Pointed The Way to Data Rather than Storage Management
* Conflating Data Protection and Data Mobility
* Technology Choices for Data Mobility in Hybrid Cloud

Shirish’s Bio

Shirish Phatak is the Founder and CEO of Talon.  Shirish has over 15 years of experience building scalable, high-performance systems that solve mission-critical information technology challenges.  Shirish was Chairman of the Board and Co-founder of Velocius Networks,

 #64 – Success & Failure in Storage Startup Land | File Type: audio/mpeg | Duration: 31:34

This week’s conversation follows up on Chris’ recent visit to Flash Memory Summit in the US.  Chris and Martin discuss the storage startup landscape and the range of companies appearing at the event.

What makes a company successful?  Is IPO or acquisition the right route?  The discussion starts with a simple, yet tricky question – why does storage continue to be such a diverse marketplace, with so many solutions to problems?  We see a storage “pendulum” effect, with vendors moving between hardware and software.  At the moment, there seems to be more focus on hardware solutions.

The conversation moves to the wider industry, talking about the whole process of acquisition.  Who is left to acquire these days, when the big incumbents already have most of their storage portfolio in place?  Could hyperscalers, with their deep pockets, be likely acquirers?

Looking across the market we see companies that have failed to make it.  Then there are the phoenixes, coming back from the dead.  There are the zombies, who continue to exist without moving the market or reaching acquisition.  Of course, there are also the darlings, who get acquired and attract all the money.

What makes success?  Having a product that solves a real-world problem, but at the same time is marketed well.  Apparently, having a good colour scheme also helps!

Elapsed Time: 00:31:34

Timeline

* 00:00:00 – Intros
* 00:00:10 – Flash Memory Summit recap
* 00:02:20 – Why does storage remain so diverse?
* 00:03:00 – The storage pendulum of hardware and software
* 00:04:30 – Samsung Galaxy Note – 1TB in your pocket!
* 00:05:00 – MRAM in IBM’s FlashModule drives
* 00:06:00 – Acquisition or IPO for startups – which is best?
* 00:07:30 – Storage startups need investors with deep pockets
* 00:08:30 – The thing about hardware is that it’s hard…
* 00:09:10 – Who wants to acquire storage array companies? Anybody?
* 00:10:00 – Maybe startups need to create complementary technologies?
* 00:11:20 – How about acquisition by the hyperscalers? Big pockets?
* 00:12:30 – Hyperscalers don’t need to commercialise products in the same way
* 00:14:15 – The dream of the frictionless sale
* 00:14:20 – Storage companies that have died – Primary Data, Coho Data, Starboard
* 00:16:00 – The use case of products, like Diablo Technologies
* 00:17:30 – The phoenixes – Coraid, Tintri, Violin Systems/Memory
* 00:20:00 – The perpetuals – X-IO, Nimbus Data, Dot Hill
* 00:22:00 – So what is the right exit?
* 00:23:00 – Success?  It’s based on branding colour
* 00:25:00 – Why not just invent a new market space like Rubrik/Cohesity?
* 00:26:00 – If you’re profitable, why bother floating (unless as an exit strategy)?
* 00:27:30 – What defines success?  Fixing a real problem
* 00:30:40 – Wrap up

Related Podcasts & Blogs

* Soundbytes #012: The Resurrection of Violin Systems with CEO Ebrahim Abbasi
* #43 – All-flash Market Review 2018 with Chris Mellor
* #57 – Storage on the Edge with Scott Shadley
* What Next for Violin Systems?
* Can Violin Systems...

 #63 – Datrium CloudShift | File Type: audio/mpeg | Duration: 23:07

In this episode, Chris and Martin catch up with the Datrium team to discuss CloudShift, Datrium’s new SaaS DR offering.  CloudShift provides the capability to use backups that have been written to Cloud DVX to fire up a VMware Cloud on AWS instance, recovering the backups into the vSphere cluster.  CloudShift can also be used to do DR between primary and secondary DVX environments.

As the conversation evolves, Chris and Martin investigate exactly how an on-premises environment is replicated to the cloud.  CloudShift requires initial configuration and then tracks virtual machines for compliance against the expected configuration.  What’s interesting is using VMC (VMware Cloud on AWS) as a DR target.  Although Datrium’s experience is that an environment can be built in around an hour, there’s no specific SLA.  This, of course, could be an issue for RTO (recovery time objective).

Sazzala mentions their VMworld 2018 presentations.  Here they are:

* Battle Royale: SysAdmin vs DevOps Engineer [CODE5557U] – Monday, Aug 27, 11:00 a.m. – 11:30 a.m. | Power Session Theater in the VMTN Lounge
* Existing Choices to Leverage VMware Cloud on AWS (VMC) for DR [VMTN5977U] – Thursday, Aug 30, 11:00 a.m. – 11:30 a.m. | VMTN Theater in the VMTN Lounge
* The DR site is dead, long live DR! On-demand DR on VMware Cloud on AWS. [VMTN5618U] – Tuesday, Aug 28, 12:00 p.m. – 12:30 p.m. | VMTN Theater in the VMTN Lounge
* Automatic Failover to AWS When a Wildfire Approaches Your Data Center [HYP3720BUS] – Tuesday, Aug 28, 11:30 a.m. – 12:30 p.m. | Islander C, Lower Level
* Faster Home Loans on VMware vSphere Mean More Financial Services Revenue [VAP1454BU] – Tuesday, Aug 28, 4:00 p.m. – 5:00 p.m. | Breakers J, Level 2

Elapsed Time: 00:23:07

Timeline

* 00:00:00 – Intros
* 00:01:00 – What is CloudShift?
* 00:02:00 – Recap – what is DVX and Cloud DVX?
* 00:04:00 – CloudShift in more detail – site to site & site to cloud
* 00:06:30 – Using VMC (VMware Cloud on AWS) for operational consistency
* 00:07:24 – How does Datrium make money from this service?
* 00:09:00 – DR is about getting to the DR site – and getting back…
* 00:10:30 – Automated DR testing – scheduling regular tests
* 00:12:40 – What about RTO?  How quickly does failover work?
* 00:16:00 – How does VMC get configured to match the production site?
* 00:17:40 – Cloud DVX was SaaS, so CloudShift extends the SaaS model
* 00:18:00 – What is next? Data lifecycle
* 00:20:00 – SaaS – could this be the future for storage vendors?
* 00:21:00 – VMworld presentations
* 00:22:00 – Wrap Up

Related Podcasts & Blogs

* #48 – Introduction to Datrium DVX (Sponsored)
* #43 – All-flash Market Review 2018 with Chris Mellor
* Garbage Collection #005 – Disaggregated Storage
* Modern Storage Architectures: Datrium
* Why Deterministic Storage Performance is Important

Copyright (c) 2016-2018 Storage Unpacked.  No reproduction or re-use without permission.

 #62 – The Future of Data Infrastructure with Scott Hamilton | File Type: audio/mpeg | Duration: 21:56

This is the third of a series of three podcasts recorded at Flash Memory Summit 2018.  In this conversation, Chris talks to Scott Hamilton, Senior Director of Product Management, DCS group at Western Digital Corporation.  WDC are working on a new architecture that will deliver a composable data infrastructure for the enterprise.  This podcast discusses why composable is needed and exactly what scale of customers will benefit from the disaggregation of compute and storage. WDC are not looking to move into the general infrastructure market with this solution, but rather are developing an open standard to which other vendors can design and integrate products.  This is also intended for large-scale customers that have hundreds of petabytes to exabytes of storage to process. The discussion covers some of the issues we’re going to encounter over the next few years, as analytics (whether ML/AI or otherwise) becomes more widespread.  Data is the new renewable resource, with continual improvements in algorithms set to allow data to be processed and reprocessed over time. The conversation ends with a brief mention of products.  More details on the OpenFlex platform, F3000 and D3000 offerings can be found online here. Elapsed Time: 00:21:56 Timeline * 00:00:00 – Intros * 00:01:50 – Data is the new oil?  Not really…. * 00:03:00 – Defining big data and fast data * 00:05:30 – Managing data at exabyte scale – how will we do it? * 00:07:00 – Why composable as a new architecture? * 00:09:00 – Exactly how does WD view what composable means? * 00:11:00 – Will WD be selling servers with the OpenFlex architecture? * 00:14:00 – Disaggregation is the key to composable * 00:16:00 – Was SAN just the first form of disaggregation? 
* 00:18:00 – OpenFlex F3000 (performance) & D3000 (capacity) * 00:19:30 – NVMe-oF – not just performance – uniform access * 00:21:00 – Wrap Up Related Podcasts & Blogs * #61 – Introduction to NVM Express with Amber Huffman * #60 – New Data Economy with Derek Dicker * #56 – Defining Scale-Out Storage * #55 – Storage for Hyperscalers * The Ideal of Composable Infrastructure * Composable IT – HPE’s Next Big Thing Podcast Episode U1E3.

 #61 – Introduction to NVM Express with Amber Huffman | File Type: audio/mpeg | Duration: 13:27

This is the second of a series of podcasts recorded at Flash Memory Summit 2018 in Santa Clara.  In this episode, Chris talks to Amber Huffman, Intel Fellow and President and founder of NVM Express Inc.  NVM Express is the standards body that governs the development of the NVMe base standard, NVMe-MI (Management Interface) and NVMe over Fabrics.  Amber explains how standards bodies are initially established, including the ongoing ownership of intellectual property.  The discussion continues, looking at how NVM Express standards are developed by committee, how the body is funded and how the process of wider education in the industry is managed. More details on NVM Express can be found at https://nvmexpress.org/, or by following the NVM Express Twitter account (https://twitter.com/nvmexpress). Update: Amber was awarded the inaugural SuperWomen in Flash Leadership Award at FMS 2018.  Read more on this recognition here. Elapsed Time: 00:13:27 Timeline * 00:00:00 – Intros * 00:02:00 – What is NVM Express? * 00:03:30 – How does an industry body get funded? * 00:04:30 – Who owns the standards? * 00:05:30 – Is NVM Express hardware or software focused? * 00:06:00 – What is NVMe-MI? * 00:08:00 – What specifications exist? * 00:09:00 – How are the standards developed? * 00:10:30 – What details of specifications are available online? * 00:11:30 – Training and education, how does NVM Express help? * 00:12:20 – How well adopted is NVMe in the enterprise? Related Podcasts & Blogs * #59 – Ethernet vs Fibre Channel * #43 – All-flash Market Review 2018 with Chris Mellor * #37 – State of the Storage Union with Chris Mellor * NVMe over Fabrics – Caveat Emptor? * The Race towards End-to-End NVMe in the Data Centre * Has NVMe Killed off NVDIMM? Amber’s Bio Amber Huffman is an Intel Fellow and the director of storage interfaces in the Non-Volatile Memory Solutions Group at Intel Corporation. 
She leads the development of storage interfaces and works to integrate the resulting technology in Intel products, with a focus on furthering Intel’s non-volatile memory (NVM) business initiatives. A respected authority on storage architecture, Amber has used her expertise and influence to lead Intel and the storage industry toward the definition and adoption of fast, streamlined, highly power-managed and low-latency storage interfaces. Her leadership role in industry standards efforts includes forming and chairing the NVM Express (NVMe) Workgroup (nvmexpress.org), a consortium of companies working to define a standardized interface for PCI Express-based solid-state drives. Amber chairs the board of directors for the NVMe Workgroup and the Open NAND Flash Interface (ONFI) Workgroup (onfi.org); both groups are coalitions of more than 100 technology companies.

 #60 – New Data Economy with Derek Dicker | File Type: audio/mpeg | Duration: 22:13

This week Chris is attending the Flash Memory Summit in Santa Clara.  This is the first of three podcast recordings from the event and is a conversation with Derek Dicker, CVP and GM of the Storage Business Unit at Micron. Derek participated in a keynote session at the event and talked about the challenges of managing new data types.  This includes processing data coming from a range of new sources, as well as providing capabilities to do new processing like analytics at the core and edge.  The interesting part of the discussion is how ML/AI system designs will require significantly more DRAM as well as flash.  Not surprisingly, Micron is expecting their new enterprise QLC drive to be a basis for managing future capacity. In this recording we were able to resolve the origins of the Idaho Spud, a delicious piece of confectionery that usually accompanies Calvin Zito on his travels.  Thanks for clearing that one up Derek! Derek’s presentation was entitled QLC Flash: Meeting the Challenges of the New Data Economy and is linked here. More details on Micron storage products can be found through their Twitter and LinkedIn accounts. Elapsed Time: 00:22:13 Timeline * 00:00:00 – Intros * 00:01:00 – The New Data Economy * 00:02:30 – 40th Anniversary of Micron * 00:03:30 – The Idaho Spud – funding from agriculture * 00:04:30 – Where is new data coming from? * 00:07:15 – Everyone is focused on data with mobile * 00:08:15 – Cars will generate and store masses of data * 00:12:00 – AI – driving demand for memory and storage * 00:15:00 – DRAM moving to high density with DDR4 & DDR5 * 00:16:00 – QLC the next big thing in NAND flash * 00:19:15 – Avoiding the 3D-XPoint question! 
* 00:20:00 – Wrap up Related Podcasts & Blogs * #57 – Storage on the Edge with Scott Shadley * #55 – Storage for hyperscalers * #44 – Ultra High Capacity Flash Drives * Micron ushers in the era of QLC SSDs * Flash Diversity: High Capacity Drives from Nimbus and Micron * QLC NAND – how real is it and what can we expect from the technology? Derek’s Bio Derek Dicker is Corporate Vice President and General Manager of the Storage Business Unit at Micron Technology. He is responsible for building world-class storage products and value-added solutions based on Micron’s nonvolatile memory technology to address the growing opportunity in large market segments like cloud, enterprise and client computing. Mr. Dicker has 20 years of experience in the semiconductor industry, including sales, marketing and executive roles at Intel, Integrated Device Technologies and PMC-Sierra (acquired by Microsemi Corporation). Most recently, he served as vice president and business unit manager of performance storage at Microsemi, where he led a global organization and drove all general management functions. Mr. Dicker earned a bachelor’s degree in computer science and engineeri...
