Storage Unpacked Podcast

Summary: Storage Unpacked is a technology podcast that focuses on the enterprise storage market. Chris Evans, Martin Glassborow and guests discuss technology issues with vendors and industry experts.

  • Artist: Storage Unpacked Podcast
  • Copyright: Copyright © 2016-2021 Brookend Ltd

Podcasts:

 #179 – The Myth of Cheap Cloud Storage | File Type: audio/mpeg | Duration: 30:58

Everyone thinks cloud storage is cheaper than on-premises, but is that really true? When cloud storage like AWS S3 was first introduced, vendors dropped their $/GB prices on a frequent basis. However, AWS hasn’t reduced the base price of S3 in five years, preferring instead to offer cheaper tiers of storage. In this week’s episode, Chris and Martin debate the reasons for the lack of reductions, especially in light of continuing growth in HDD capacities and falling costs.

Elapsed Time: 00:30:58

Timeline
  • 00:00:00 – Intros
  • 00:02:00 – Is cloud storage cheaper than on-premises?
  • 00:05:05 – What new storage services has AWS introduced?
  • 00:07:00 – There are complex rules around data retrieval
  • 00:08:40 – Cloud is driving the need for FinOps skills
  • 00:10:33 – Data mobility can be lost with egress charges
  • 00:12:30 – Access patterns (like sub-object) are hard in cloud
  • 00:13:10 – Why have base storage prices not declined?
  • 00:17:00 – Cloud providers could be tiering behind the scenes
  • 00:18:52 – Should you build your own storage cloud?
  • 00:20:00 – IOPS (block) data needs to be treated separately
  • 00:21:40 – On-premises storage lets you sweat the hardware assets
  • 00:23:00 – Data is fixed, compute is ephemeral
  • 00:26:30 – Understand your data profiles
  • 00:28:00 – Cloud forces you to manage your data
  • 00:30:00 – Wrap Up

Related Podcasts & Blogs
  • Is AWS passing on the benefits of storage media price reductions?
  • #166 – Infinidat Elastic Storage Pricing with Eran Brown
  • #55 – Storage for Hyperscalers

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #6illj.
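To make the tiering discussion concrete, here is a small sketch of the arithmetic behind "cheaper tiers". The per-GB prices and retrieval fees below are illustrative assumptions for the calculation, not current AWS list prices.

```python
# Illustrative monthly object storage cost across tiers. The prices
# used here are assumptions for the arithmetic, not AWS list prices.

def monthly_cost(gb_stored, price_per_gb, gb_retrieved=0.0, retrieval_per_gb=0.0):
    """Storage cost plus any per-GB retrieval charge for the month."""
    return gb_stored * price_per_gb + gb_retrieved * retrieval_per_gb

data_gb = 50_000  # 50 TB of data

standard = monthly_cost(data_gb, price_per_gb=0.023)
# A cheaper "infrequent access" tier only wins if retrieval really is rare:
ia_heavy = monthly_cost(data_gb, price_per_gb=0.0125,
                        gb_retrieved=40_000, retrieval_per_gb=0.01)
ia_light = monthly_cost(data_gb, price_per_gb=0.0125,
                        gb_retrieved=1_000, retrieval_per_gb=0.01)

print(f"standard tier:        ${standard:,.2f}")
print(f"IA, heavy retrieval:  ${ia_heavy:,.2f}")
print(f"IA, light retrieval:  ${ia_light:,.2f}")
```

The point of the episode's "complex rules around data retrieval" segment falls out of this directly: with heavy retrieval, the cheaper tier's saving is largely eaten by per-GB retrieval charges.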

 #178 – Monetising the Value of Data | File Type: audio/mpeg | Duration: 32:53

This week, Chris has a great conversation with Bill Schmarzo, Chief Innovation Officer at Hitachi Vantara. Bill maintains a consultancy practice within Hitachi that helps customers build processes and identify data that can be used to create business value within an organisation. In this discussion, Bill outlines the process for identifying opportunities, capturing the data and building governance around translating data into income. Aside from the insights into how businesses can generate new revenue streams from the data in their organisation, this podcast highlights the challenges of determining which data to keep and discard, while attributing value to data being protected. We assume all data needs to be protected equally, but perhaps there needs to be an alignment between data management and data value. Bill has three books in print with a fourth (The Economics of Data Analytics and Digital Transformation) expected in November 2020. During the podcast, Bill references the Prioritisation Matrix. This blog post provides more information on the concept and process (explaining what we couldn’t visualise on an audio podcast). You can catch up with Bill on LinkedIn and Twitter. Here’s a link to Bill’s second book, The Big Data MBA, referenced at the end of the podcast.

Elapsed Time: 00:32:53

Timeline
  • 00:00:00 – Intros
  • 00:02:16 – How is data monetised?
  • 00:03:30 – What data is worth retaining or discarding?
  • 00:05:00 – Data creators and owners can be very diverse
  • 00:06:30 – Create the business need then locate the data
  • 00:08:10 – Pick one idea and prove it out – Proof of Value
  • 00:09:45 – How is governance managed in a Data Lake?
  • 00:11:00 – How is data management cost aligned with data value?
  • 00:14:00 – Feedback loop – review the prioritisation matrix
  • 00:16:00 – Economies of learning are greater than economies of scale
  • 00:18:50 – Data is a unique business asset
  • 00:20:00 – How is valuable data protected?
  • 00:24:00 – Data is the new oil? Almost, but it’s infinite
  • 00:26:00 – Why teach?
  • 00:28:30 – How does AI empower people in the workplace?
  • 00:30:40 – Wrap Up

Related Podcasts & Blogs
  • #103 – Data Management and DataOps with Hitachi Vantara
  • #81 – Storage or Data Asset Management?
  • The $2 billion Gamble on Data Management

Bill’s Bio
Bill Schmarzo is regarded as one of the top Digital Transformation influencers on Big Data and Data Science. In addition to his role as Chief Innovation Officer at Hitachi Vantara, Bill is a University of San Francisco School of Management (SOM) Executive Fellow and an Honorary Professor at the School of Business and Economics at the National University of Ireland-Galway. His career spans over 30 years in data warehousing, BI and advanced analytics. Bill formerly served as CTO of Big Data

 #177 – SmartNICs and Project Monterey | File Type: audio/mpeg | Duration: 36:13

This week Chris and Martin look at SmartNIC technology and the announcement of Project Monterey. SmartNICs are offload devices that provide networking, storage and security functions with additional benefits such as centralised management. VMware has announced Project Monterey, a preview solution that takes SmartNICs and offloads storage and networking tasks from the ESXi hypervisor. It’s clear from the discussion that SmartNIC technology needs to provide a “10x” benefit to the data centre. The technology is already in use by hyperscalers to deliver Public Cloud. With Monterey and ESXi 7, the private data centre can use a similar centralised management approach. How will this work with existing infrastructure? VMware has ported ESXi to ARM and intends to run services on ESXi on the SmartNIC card. This raises so many questions, so join us for a fascinating discussion on this technology.

Elapsed Time: 00:36:13

Timeline
  • 00:00:00 – Intros
  • 00:01:30 – Project Monterey – not the IBM AIX port…
  • 00:03:00 – What is a SmartNIC?
  • 00:04:05 – Are SmartNICs new or a reinvention of existing technology?
  • 00:04:40 – Mainframes did offload first
  • 00:08:00 – Division of responsibilities – team wars?
  • 00:11:00 – What benefits do SmartNICs bring?
  • 00:14:25 – Digging deeper into Project Monterey
  • 00:15:31 – Why put ESXi on ARM?
  • 00:18:00 – How will hot pluggable work?
  • 00:22:00 – Here’s where Project Monterey could be useful
  • 00:24:00 – Could SmartNICs aid the adoption of Arm for primary workloads?
  • 00:27:40 – Could Composable see a big benefit from SmartNICs?
  • 00:30:00 – Return of the mainframe?
  • 00:31:55 – On reflection, are SmartNICs worthwhile?
  • 00:35:00 – Wrap Up

Related Podcasts & Blogs
  • #96 – Discussing SmartNICs and Storage with Rob Davis from Mellanox
  • ESXi on Raspberry Pi
  • VMware Project Monterey – First Impressions
  • Fixing the x86 Problem

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #8j46.

 #176 – Opinionated Storage Opinions with Chris Mellor | File Type: audio/mpeg | Duration: 54:08

This week, Martin and Chris catch up with Chris Mellor in his “pandemic infused isolation”. Without a particular theme, this episode looks at the unchanging landscape of tape and media, Dell EMC building another new storage platform and Nutanix still not making a profit. In the future zone, we speculate on AMD and Micron getting together to build an Optane challenger, the chances of Kubernetes storage being successful and whether Nebulon has a new architecture with cloud-defined storage. Naturally the conversation is interspersed with many tangents and diversions, but hey, it wouldn’t be opinion without it! This recording is way longer than normal and for that we sincerely apologise, and hope you enjoy it – keep listening to the end though, as every minute is worth your time. As usual, if you disagree or have any comments, just get in touch.

Elapsed Time: 00:54:08

Timeline
  • 00:00:00 – Intros
  • 00:01:00 – Reading the weather on Countryfile
  • 00:02:20 – Some things never change
  • 00:02:40 – LTO9 slowing the capacity paradigm
  • 00:07:00 – 3480 cartridges were in megabytes
  • 00:08:10 – Pushing 20TB drive capacities
  • 00:09:19 – Martin takes it back to tape
  • 00:10:00 – How do tapes take such a beating?
  • 00:12:00 – Pollution and computer equipment
  • 00:13:30 – Another Dell EMC storage platform?
  • 00:19:00 – We need to play Storage buzzword bingo!
  • 00:19:30 – Is Intel making a profit on Optane?
  • 00:22:00 – Could AMD pair up with Micron on 3D-XPoint?
  • 00:26:00 – Could AMD/Micron succeed in the console space?
  • 00:28:00 – Is a new Nutanix CEO change the cost of new money?
  • 00:30:00 – Now emerging solutions/technologies
  • 00:30:05 – What does the Pure/Portworx acquisition mean?
  • 00:33:00 – Could Kubernetes be the next OpenStack?
  • 00:35:30 – NetApp has rebranded!
  • 00:37:00 – Storage on a Stick anyone? (Nebulon)
  • 00:40:02 – Most storage is the same generic technology
  • 00:45:00 – Stop the press! Charlie’s closing down Pure!
  • 00:54:00 – Wrap Up

Related Podcasts & Blogs
  • #158 – Midrange Teardown with Chris Mellor
  • #138 – Storage Predictions for 2020 (Part I)
  • #139 – Storage Predictions for 2020 (Part II)

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #5cgb.

 #175 – IBM FlashSystem Deep Dive (Sponsored) | File Type: audio/mpeg | Duration: 41:30

This week Chris catches up with Ralf Colbus from IBM to talk about the evolution of FlashSystem. The FlashSystem platform is an enterprise-class block-based storage solution that scales from SMB/SME offerings to high-end all-NVMe and SCM-capable devices. At the heart of the design is the software behind Spectrum Virtualise, the SAN Volume Controller or SVC. FlashSystem now offers a standardised portfolio based on SVC. This starts with the 5010 and 5030, both SAS-based solutions that can be all-flash or hybrid flash & HDD. The 5100 upwards (including 7200 and 9200) introduces NVMe and IBM’s FlashCore modules. FlashCore is based on technology developed from the TMS acquisition. FlashCore is effectively a custom SSD, based purely on hardware and with in-built FPGAs that deliver features which include hardware-based encryption. FlashCore also uses MRAM to eliminate super-capacitors (see episode #159). Capacities scale from 4.8TB to 38.4TB per module and up to 2PB in 2U. An interesting aspect in the evolution of FlashSystem is the strategy to put pricing online. We’ve talked about transparent pricing recently (episode #173) and it’s good to see IBM being open about the cost of solutions while also offering new, flexible terms, including “pay as you go”. To learn more, check out the IBM Storage Digital Platform.

Elapsed Time: 00:41:30

Timeline
  • 00:00:00 – Intros
  • 00:01:00 – Where did SVC & FlashSystem develop from?
  • 00:03:06 – Storwize brought a brand name and some IP
  • 00:03:55 – IBM standardised the portfolio around SVC
  • 00:05:00 – Low-end FlashSystem can be hybrid storage (HDD & SSD)
  • 00:06:26 – 5100 models upwards use NVMe in place of SAS
  • 00:07:20 – 5100 onwards introduces hardware-based encryption
  • 00:08:06 – FlashCore – custom flash SSD modules based on TMS IP
  • 00:10:00 – FlashCore is all hardware with FPGAs
  • 00:13:00 – SCM/PMEM (Optane) can be used as a tier or for pinned LUNs
  • 00:17:00 – FlashSystem scales from two to eight nodes (4-way)
  • 00:21:00 – SVC is used within FlashSystem as the software layer
  • 00:23:00 – SVC (and so FlashSystem) offers the capability of 100% uptime
  • 00:25:00 – FlashSystem supports CSI and automation tools like Ansible
  • 00:28:45 – How has IBM changed their FlashSystem pricing strategy?
  • 00:32:00 – IBM offers flexible “on-demand” pricing
  • 00:35:00 – How has IBM evolved Storage Resource Management tools?
  • 00:39:00 – IBM is using Watson to create Storage AIOps insights
  • 00:40:25 – Wrap Up

Ralf’s Bio
Ralf Colbus is Chief Storage Strategist for IBM Systems, designing secure and cost-effective data storage solutions for customers and business partners in the DACH (Germany, Austria, Switzerland) region. He has over 20 years of experience working in data storage across several industries, and is very passionate about helping clients understand the power of data in supporting strategic decisions which will give them competitive advantage. Outside the world of data, he enjoys music, good wine, playing golf, and advocating for environmental causes such as climate change.

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #gc78.

 #174 – Introduction to Zoned Storage with Phil Bullinger | File Type: audio/mpeg | Duration: 45:08

This week, Chris and Martin chat to Phil Bullinger, Senior VP and General Manager for the Data Centre Business Unit at Western Digital. As storage media capacities increase, recording methods are introducing challenges to maintaining resiliency and performance. SMR (Shingled Magnetic Recording) and ZNS (Zoned Namespaces) are two techniques that have developed to address the scaling issues in modern media devices. SMR is a technique for hard drives that overlays the recording area of tracks on storage media to gain increased areal density. This results in a requirement to re-write entire blocks or zones of disk that could be tens of megabytes in size. ZNS divides solid-state media into zones to improve I/O performance and increase device endurance. This is achieved through reduction in write amplification and reduced garbage collection. With both media, the recording techniques continue to help reduce the cost of devices and maintain the downward trend of $/GB for modern media. This discussion introduces us to the benefits and some of the challenges. If you want to learn more, Western Digital sponsors an independent website at http://zonedstorage.io/.

Elapsed Time: 00:45:08

Timeline
  • 00:00:00 – Intros
  • 00:02:00 – What are today’s storage & data challenges?
  • 00:04:20 – HDDs & SSDs continue to grow in capacity but have challenges
  • 00:06:40 – I/O density represents a big problem for today’s media
  • 00:09:00 – What is zoned storage?
  • 00:12:41 – HDD media grains are driving new ways of reading/writing
  • 00:14:00 – SMR overmap
  • 00:16:50 – Zoned storage is designed for sequential workloads
  • 00:19:05 – What is ZNS – Zoned Namespaces?
  • 00:21:00 – SSDs try to emulate hard drives
  • 00:23:40 – ZNS can simplify the operation of reads and writes
  • 00:26:00 – ZNS can make SSD performance more deterministic
  • 00:28:40 – SMR has drive-managed, host-managed or host-aware
  • 00:32:00 – Data centre builds are all about TCO, including storage
  • 00:34:34 – Will SDS see greater benefits from zoned storage?
  • 00:38:50 – Most data is written once and never amended
  • 00:42:00 – ZNS will improve the endurance of QLC
  • 00:43:00 – Wrap Up

Related Podcasts & Blogs
  • #113 – The Expanding Storage Hierarchy
  • #55 – Storage for Hyperscalers
  • #161 – Seagate MACH.2 Dual Actuator Drive Deep Dive
  • Managing Massive Media

Phil’s Bio
Phil Bullinger is Senior Vice President and General Manager of the Data Center Business Unit for Western Digital where he focuses on accelerating the growth and performance of the company’s broad portfolio of data center disk and flash products. Previously, Bullinger was Senior Vice President and General Manager at Dell EMC, where he was responsible for the Isilon product line, including product planning, hardware and software engineering, production operations and customer support. He also held executive positions at Oracle as their Senior Vice President of SAN/NAS Storage, and at LSI as the Executive Vice President and General Manager for the Engenio Storage Group. Bullinger’s
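The sequential-write constraint at the heart of both SMR zones and ZNS namespaces can be sketched as a toy model. This is an illustration of the concept only, not the NVMe ZNS command set or any vendor's implementation:

```python
# Toy model of a storage zone with a write pointer, sketching the
# sequential-write rule that SMR zones and ZNS namespaces share.

class Zone:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.write_pointer = 0                  # next block that may be written
        self.blocks = [None] * capacity_blocks

    def write(self, lba, data):
        # Writes must land exactly at the write pointer: no overwrites, no
        # gaps. Random writes have to be buffered and serialised by the host
        # (host-managed) or by the drive itself (drive-managed).
        if lba >= self.capacity:
            raise ValueError("zone full")
        if lba != self.write_pointer:
            raise ValueError(f"non-sequential write at {lba}, "
                             f"expected {self.write_pointer}")
        self.blocks[lba] = data
        self.write_pointer += 1

    def read(self, lba):
        return self.blocks[lba]                 # reads are unrestricted

    def reset(self):
        # Amending any block means resetting (rewriting) the whole zone,
        # which is why these devices suit write-once, sequential workloads.
        self.write_pointer = 0
        self.blocks = [None] * self.capacity
```

The `reset()` method is the toy equivalent of re-writing an entire SMR zone tens of megabytes in size, and shows why "most data is written once and never amended" is the workload these devices want.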

 #173 – Transparent Enterprise Storage Pricing | File Type: audio/mpeg | Duration: 35:05

Enterprise storage pricing has all the simplicity of a mobile phone tariff. Vendors love to obfuscate the costs, whereas prospective purchasers simply want a good, honest price. Why does enterprise storage pricing have to be so complicated, and why can’t we just have pricing online? Chris and Martin chat to George Crump from StorONE about strategies for pricing from both the customer and vendor perspective. Vendors mentioned in this podcast: StorONE, IBM, NetApp, Microsoft Azure, Pure Storage, Dell EMC. Find more about StorONE at https://www.storone.com.

Elapsed Time: 00:35:05

Timeline
  • 00:00:00 – Intros
  • 00:02:00 – Enterprise storage pricing is very ambiguous
  • 00:03:20 – Hotel room rates and storage pricing are made up
  • 00:04:50 – How do vendors set list price?
  • 00:05:40 – Sales cycles drive pricing on a quarterly & annual basis
  • 00:07:00 – Do “good” customers get a better deal?
  • 00:09:30 – Is being transparent on price a weakness?
  • 00:11:30 – Storage value is in the software
  • 00:12:00 – Software pricing is generally done by capacity
  • 00:14:00 – Subscriptions allow vendors to add value through new features
  • 00:18:00 – Should features be inclusive or itemised?
  • 00:20:10 – Cloud is training people to adjust to itemised pricing
  • 00:27:00 – So what does pricing in the cloud look like?
  • 00:29:53 – Customers need to be ready to buy
  • 00:34:00 – Wrap Up

Related Podcasts & Blogs
  • #166 – Infinidat Elastic Storage Pricing with Eran Brown
  • #164 – Introduction to StorONE S1: AFAn
  • #130 – Making Money in the Storage Business

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #olcc.

 #172 – Tintri SQL Integrated Storage | File Type: audio/mpeg | Duration: 34:03

This week, Chris and Martin talk to Shawn Meyers, Field CTO at Tintri, about SQL Integrated Storage. The Tintri VMstore platform originally provided the ability to apply policy-based management to virtual machines on shared storage. This capability has now been extended to databases, in particular Microsoft SQL Server. SQL Integrated Storage (or SIS) works by exposing an SMB share from the VMstore platform onto which database files are stored. VMstore is provided awareness of the SQL database structure and can therefore manage the QoS and data management requirements of individual files that comprise a single database. What’s interesting about this technology is the promise for further abstraction of application data away from virtual machines. The files of a database can now easily be mapped to a container or physical machine as required. Data can be cloned and replicated very easily. The technology can also be extended to other structured data solutions. Learn more about SIS here – https://tintri.com/solutions/sql-integrated-storage/

Elapsed Time: 00:34:03

Timeline
  • 00:00:00 – Intros
  • 00:02:00 – Tintri is now part of DDN
  • 00:03:00 – What is VMstore?
  • 00:05:20 – VMstore applies VM-level QoS
  • 00:07:00 – vSAN doesn’t do VM-based QoS
  • 00:09:00 – SQL Integrated Storage offers database-level QoS
  • 00:10:45 – How is SIS managing data? SMB
  • 00:12:50 – What databases are supported? MS SQL Server
  • 00:15:00 – NFS faster than Fibre Channel? Never!
  • 00:16:51 – What does SIS enable?
  • 00:19:30 – Separation of data and code
  • 00:20:30 – SIS is a data mobility solution
  • 00:24:00 – SIS could provide minute-based SQL snapshots
  • 00:25:43 – What is the customer feedback so far?
  • 00:27:40 – SIS could enable quick post-replication checks
  • 00:29:15 – Obligatory mainframe reference!
  • 00:31:09 – Call to action
  • 00:32:30 – Wrap Up

Related Podcasts & Blogs
  • Private Cloud Storage and The Tintri Platform
  • #168 – Storage Unicorns
  • #110 – Storage Vendor Consolidations & Acquisitions
  • Whatever Happened to VVOLs?

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #hlo4.

 #171 – Exploiting Persistent Memory with MemVerge | File Type: audio/mpeg | Duration: 39:10

This week the team double down on the topics of in-memory computing and persistent memory. Chris and Martin talk to Charles Fan, CEO at MemVerge, about Big Memory and using persistent memory technology (specifically Optane) to supplement system DRAM. DRAM is expensive and as capacities scale linearly, the price of memory increases exponentially. Systems are limited by maximum addressable memory per socket. Persistent Memory in the form of Intel Optane provides the capability to massively increase the virtual memory footprint, using a combination of DRAM and Optane DIMMs. How is that memory managed? This is where MemVerge comes in. MemVerge uses the Linux LD_PRELOAD mechanism to intercept memory management calls and direct requests to the MemVerge software. This means MemVerge can manage applications sitting across real DRAM and Optane. It also means the company can introduce neat tricks like memory snapshots and application rollbacks. This is a great conversation and serves as a great introduction to what we can expect of in-memory computing in the future. Find more on MemVerge at https://www.memverge.com/

Elapsed Time: 00:39:10

Timeline
  • 00:00:00 – Intros
  • 00:02:15 – Real-time computing is driving in-memory computing
  • 00:03:00 – DRAM is expensive and not scalable
  • 00:05:30 – Applications can hit a cliff edge when in-memory capacity is exhausted
  • 00:07:45 – Persistent Memory (like Optane) is byte-addressable
  • 00:09:30 – Memory capacities scale linearly and costs increase exponentially
  • 00:11:00 – How are systems going to balance DRAM and PMEM access in a single server?
  • 00:13:00 – Application rewrites are not popular…
  • 00:15:30 – How is PMEM identified by the O/S?
  • 00:16:45 – MemVerge implements “software-defined memory” – LD_PRELOAD
  • 00:19:30 – How is MemVerge technology implemented?
  • 00:21:30 – What algorithms are used to manage DRAM and PMEM?
  • 00:24:45 – Test figures show performance can be better than DRAM
  • 00:28:00 – How do you use the persistence of PMEM effectively?
  • 00:30:00 – MemVerge offers application checkpointing – memory snapshots
  • 00:33:00 – Snapshots in memory and on disk are co-ordinated
  • 00:36:45 – Persistent Memory will be a game-changer for in-memory computing
  • 00:38:00 – Wrap Up

Related Podcasts & Blogs
  • #169 – In-Memory Computing and Apache Ignite
  • #36 – The Persistence of Memory with Rob Peglar
  • #159 – Introduction to MRAM with Joe O’Hare from Everspin
  • Persistent Memory in the Data Centre
  • What are Storage Class Memory (SCM) and Persistent Memory (PM)?

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #747k.
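The economics behind supplementing DRAM with Optane can be sketched with some simple arithmetic. The per-GB prices below are illustrative assumptions for the sake of the comparison, not vendor quotes:

```python
# Illustrative server memory cost: DRAM-only versus a DRAM + persistent
# memory (Optane DIMM) mix. Prices per GB are assumptions, not quotes.

def memory_cost(dram_gb, pmem_gb=0, dram_per_gb=10.0, pmem_per_gb=4.0):
    """Total memory bill of materials for one server."""
    return dram_gb * dram_per_gb + pmem_gb * pmem_per_gb

# Target: roughly 3 TB of addressable memory per socket.
dram_only = memory_cost(dram_gb=3072)
mixed     = memory_cost(dram_gb=512, pmem_gb=2560)  # DRAM as the hot tier

print(f"DRAM only:   ${dram_only:,.0f}")
print(f"DRAM + PMEM: ${mixed:,.0f}")
```

Under these assumed prices the mixed configuration halves the memory bill for the same addressable footprint, which is the opportunity MemVerge's tiering software is chasing; the software's job is to keep the hot working set in the DRAM portion so the application doesn't notice the slower tier.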

 #170 – The End of Pure Play Storage Companies | File Type: audio/mpeg | Duration: 43:11

This week, Chris and Martin debate whether we have seen the end of new “pure play” storage companies that go the distance to full independence. Where there used to be many businesses like EMC, Pure Storage and NetApp, the number of public storage-only companies is dwindling. Is this market too challenging to get into or is there simply no money to be made? If we look back 30 years, storage hardware was all the rage. Storage solutions were complex with custom hardware. These days, anyone can build a software-based solution, with open source available for everyone to use. We’ve also seen many companies raise hundreds of millions of dollars and be in no rush to IPO. So what’s going on? Listen to find out. Vendors mentioned in this podcast: Pure Storage, NetApp, IBM, NGD Systems, Seagate, Western Digital, Tegile, Infinidat, VAST Data, Scality, Cloudian, Rubrik, Cohesity, Veeam, Nutanix, StorONE, Pavilion, DDN, Hammerspace, Storpool, StorageOS, Portworx, HPE, Hitachi, Oracle, Dell EMC, Softiron, MongoDB, Postgres, Databricks, Nebulon.

Elapsed Time: 00:43:11

Timeline
  • 00:00:00 – Intros
  • 00:03:20 – Storage hardware used to be cool
  • 00:06:00 – Unicorns and hectacorns!
  • 00:06:30 – Are companies looking to IPO or be acquired?
  • 00:07:50 – Margins on storage are thin
  • 00:11:15 – The Next Wave – Seagate/Western Digital
  • 00:14:10 – Infinidat – building traditional HDD systems
  • 00:17:30 – We like Howard!
  • 00:20:00 – Companies need new product lines
  • 00:21:00 – Where are object storage companies headed?
  • 00:23:45 – What about data protection companies?
  • 00:26:00 – Secondary data usage means working with the business
  • 00:32:00 – Some companies will need to merge
  • 00:35:00 – Buying startup solutions could be risky if the companies are acquired
  • 00:36:30 – Open source storage could be an attractive fallback
  • 00:40:00 – Wrap Up

Related Podcasts & Blogs
  • #168 – Storage Unicorns
  • #130 – Making Money in the Storage Business
  • #138 – Storage Predictions for 2020 (Part I)
  • #139 – Storage Predictions for 2020 (Part II)

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #hpno.

 #169 – In-Memory Computing and Apache Ignite | File Type: audio/mpeg | Duration: 45:35

This week Chris and Martin talk to Nikita Ivanov, CTO and founder of GridGain Systems. The topic is in-memory computing and specifically Apache Ignite, an open-source key-value store that also supports SQL99 and POSIX-compliant file interfaces. The idea of running applications purely from memory isn’t a new one. DRAM is the fastest “storage” component but isn’t designed as a long-term storage medium. Consequently, in-memory solutions such as Apache Ignite require features to ensure data resiliency and consistency. Ignite and similar solutions have a heavy focus on data distribution and protection in order to meet resiliency needs. We also have to remember that memory and storage use different semantics. Memory is byte-orientated, through LOAD/STORE type instructions, whereas storage operates at a block level through read/write instructions. This difference provides both opportunities and challenges. As Nikita indicates, the new wave of storage-class memory products (persistent memory) such as Optane may seem attractive, but the addition of persistence alone might not offer significant benefit. You can learn more about GridGain at https://www.gridgain.com/ and Apache Ignite at https://ignite.apache.org/

Elapsed Time: 00:45:35

Timeline
  • 00:00:00 – Intros
  • 00:01:10 – What is Apache Ignite?
  • 00:02:30 – Effective in-memory computing introduces multiple machines & distributed systems
  • 00:06:20 – Memory and storage have different access semantics
  • 00:09:00 – In-memory computing has driven the most advanced distributed systems
  • 00:10:24 – What data models does Apache Ignite support?
  • 00:12:00 – Ignite offers SQL99, Key Value and POSIX file system semantics
  • 00:13:19 – Ignite suits between 8 and 64 nodes
  • 00:16:00 – Ignite is aimed at high-end in-memory requirements
  • 00:18:21 – Is in-memory computing a replacement for faster hardware?
  • 00:22:30 – GPUs offer the ability to manage small-scale analytics
  • 00:23:50 – How can we differentiate between in-memory solutions?
  • 00:25:00 – Complexity is a challenge for in-memory computing
  • 00:27:30 – Do we need to modify in-memory computing to be more consumable?
  • 00:32:10 – How do we differentiate between the multiple in-memory solutions?
  • 00:34:00 – How will new media influence in-memory development?
  • 00:39:00 – The next challenge for non-volatile media is integration
  • 00:40:30 – Wrap Up

Related Podcasts & Blogs
  • #147 – Introduction to Key Value Stores and Redis
  • #36 – The Persistence of Memory with Rob Peglar
  • #159 – Introduction to MRAM with Joe O’Hare from Everspin

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #25c4.

 #168 – Storage Unicorns | File Type: audio/mpeg | Duration: 34:41

This week Chris and Martin review the idea of storage unicorns, companies that have a valuation of one billion dollars or more. What exactly is the basis or justification for a billion dollar price tag? Is this something invented by the VC industry or is there a real degree of science behind the assumptions? The list in question comes from a Blocks & Files article written by Chris Mellor, which in turn references the list produced by an analyst firm. While valuation based on some multiple of money invested does give some indication of value, there are other more tangible metrics available. Unfortunately the industry isn’t keen on sharing revenue and customer numbers other than when legally required to do so. Who is missing from the list? We see no mention of Weka, Scality, Cloudian, Wasabi or Backblaze. It seems unlikely these companies don’t have equivalent valuations, so perhaps the building of lists is more arbitrary. You can find the article written by Chris Mellor here – https://blocksandfiles.com/2020/07/15/coldago-storage-unicorns/ – so make your own mind up!

Elapsed Time: 00:34:41

Timeline
  • 00:00:00 – Intros
  • 00:01:00 – Chris Mellor’s article on Coldago Storage Unicorns
  • 00:02:40 – Unicorns have valuations greater than $1 billion
  • 00:04:00 – Datrium acquired by VMware, perhaps $600 million?
  • 00:06:00 – $1 billion is a lot of money!
  • 00:07:20 – Valuations can have no relation to the revenue of a company
  • 00:09:40 – How many storage companies have successfully IPOd in the last 5 years?
  • 00:11:30 – Where are Weka, Scality, Cloudian?
  • 00:12:40 – DDN is now a portfolio company (Tintri, Intelliflash, Nexenta acquisitions)
  • 00:16:30 – Are we over-cynical, or is it just opinion?
  • 00:19:00 – Companies need to have relevance to the public cloud
  • 00:20:05 – Party time! Rubrik & Veeam compete – what will VMworld do this year?
  • 00:22:45 – Veritas – the oldest unicorn in the world?
  • 00:25:00 – Do the unicorns need to expand their product portfolios?
  • 00:27:15 – Will we continue to see storage unicorns?
  • 00:30:10 – Are we too Western focused?
  • 00:31:00 – Who else is missing – Wasabi? Backblaze?
  • 00:33:00 – Look out for the Storage Unpacked Storage Unicorn List!
  • 00:33:40 – Wrap Up

Related Podcasts & Blogs
  • #130 – Making Money in the Storage Business
  • #110 – Storage Vendor Consolidations & Acquisitions
  • #64 – Success & Failure in Storage Startup Land

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #p2fg.

 #167 – Adapting Open Source Storage for the Enterprise | File Type: audio/mpeg | Duration: 36:09

This week Chris and Martin catch up with Phil Straw, CEO at Softiron for a discussion on packaging Open Source storage solutions for adoption by the enterprise. Softiron sells a software appliance based on Ceph which has been tuned to deliver high performance with high efficiency. So, exactly how do vendors go about making Open Source more consumable without breaking the values of Open Source software development? This discussion is interesting as it highlights both the benefits of marrying hardware and software, while working within the constraints of a community software solution. Rather than simply test, Softiron look at the code and work out the right way to make Ceph work efficiently. Of course, every enterprise could do the same thing, but typically this task was covered by the vendor. You can find more on Softiron at www.softiron.com. Elapsed Time: 00:36:09 Timeline * 00:00:00 – Intros * 00:01:30 – How can we package Open Source storage software? * 00:02:30 – Is there a gap in open source and enterprise expectations? * 00:06:00 – Where do Open Source storage ideas come from? * 00:07:30 – Softiron brings an “enterprise” experience to Ceph * 00:08:45 – Open source adoption often means just testing * 00:11:40 – Is the enterprise capable of managing open source? * 00:13:00 – How do you adopt a software product with little code control* 00:16:00 – HyperDrive is built to be a storage solution * 00:18:15 – Ceph is not forked* 00:20:00 – How can Softiron ensure Ceph heads the right direction * 00:24:30 – Storage is becoming more important again * 00:28:00 – Lock-in occurs in many ways, not purely technological * 00:29:00 – How can a vendor like Softiron protect their IP? * 00:32:45 – Always ask how you will exit using a product * 00:34:30 – What is HyperDrive? 
* 00:35:10 – Wrap Up

Related Podcasts & Blogs

* #41 – Does Open Source Have a Place in Storage?
* #46 – Another View on Open Source Storage with Neil Levine
* #130 – Making Money in the Storage Business

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #504n.

 #166 – Infinidat Elastic Storage Pricing with Eran Brown (Sponsored) | File Type: audio/mpeg | Duration: 44:28

This week, Chris and Martin are joined by Chris Mellor from Blocks & Files and Eran Brown from Infinidat to discuss flexible pricing strategies in light of current business disruptions. Infinidat has introduced an elastic pricing model for both capex and opex purchases. How is this implemented, and what platform features enable Infinidat to deliver this capability? As businesses are disrupted by the coronavirus pandemic, uncertain times mean uncertain demands and budgets. Companies may defer spending or want flexible purchasing models for technology deployments. Public cloud is one choice, but this comes with challenges around data repatriation. Infinidat uses low-cost media and proprietary data management techniques to optimise I/O across DRAM, disk and flash. With the majority of data on cheap storage, Infinidat is able to over-provision and offer customers much greater flexibility in purchasing options, including capital and operational expense. Eran takes the team through the challenges, explains some of the issues customers have experienced and then details how Infinidat can offer flexible terms to customers over the lifetime of their relationship. For more information on Elastic Pricing, check out https://www.infinidat.com/en

Elapsed Time: 00:44:28

Timeline

* 00:00:00 – Intros
* 00:01:10 – Uncertain times, uncertain demand and supply
* 00:02:30 – Budgets are under constraint
* 00:04:30 – The storage industry is in a strong position
* 00:06:00 – Is data centre access a challenge?
* 00:09:00 – Are uncertainties driving businesses to public cloud?
* 00:11:00 – Over-provisioning is one solution to avoid friction of purchase
* 00:13:00 – Moving temporarily to the cloud has business implications
* 00:17:00 – Private and public cloud purchasing are inverted
* 00:19:00 – Flexible pricing allows customers to get best value from budgets
* 00:20:00 – Can all-flash vendors afford to put over-provisioned systems onsite?
* 00:24:00 – What is Infinidat Elastic Pricing?
* 00:29:00 – Deploying to multiple customer sites is a complex process
* 00:32:00 – Tiering to cloud is an impractical solution
* 00:33:50 – How can AIOps and analytics help predict capacity demand?
* 00:37:30 – Minority Report for storage purchases
* 00:38:47 – Welcome to the Storage Bank
* 00:40:00 – Purchasing and consumption models will be more important than speeds & feeds
* 00:43:10 – Wrap Up

Transcript

Related Podcasts & Blogs

* #141 – Building Storage Systems of the Future
* #132 – Accelerating Ransomware Recovery with Eran Brown

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #m3km.

 #165 – Homogeneous Data Protection with HYCU (Sponsored) | File Type: audio/mpeg | Duration: 31:08

As we move to a hybrid and multi-cloud world, IT organisations need standardisation in operational processes. This is particularly true for data protection, where consistent policies and compliance are essential. This week, Chris chats to Subbiah Sundaram from HYCU about data protection in public clouds and the ability to bring a consistent reporting model with HYCU Protege. Why is consistency important? Public cloud providers have relatively basic data protection options that may not align with existing enterprise requirements. As workloads move around, backup needs to be consistently applied, whether that data is on premises or in a platform like Azure. There are three choices: run multiple platforms and hope they can be manually aligned; use SaaS, which offers great centralisation but doesn’t provide local recovery; or use backup software designed for each platform with a central management/reporting platform. HYCU now offers data protection for on-premises, Azure and GCP workloads. Management reporting is centralised using Protege. Customers can also use Protege for application mobility, moving backups between platforms with consistent restore capabilities across clouds. To learn more about Protege and HYCU for Azure & GCP, check out https://www.hycu.com/ and for Test Drive – find out more in this blog post. You can learn more about HYCU for Azure in this Architecting IT blog post.

Elapsed Time: 00:31:08

Timeline

* 00:00:00 – Intros
* 00:01:00 – What is hybrid and multi-cloud data protection?
* 00:04:30 – Moving to cloud needs a rethink of data protection
* 00:05:05 – Cloud backup solutions aren’t very mature
* 00:06:06 – Portability and multiple copies are still important
* 00:09:00 – Three scenarios – SaaS backup for everything
* 00:09:30 – Scenario 2 – do nothing
* 00:10:30 – Scenario 3 – centralised management, distributed backup
* 00:12:10 – Local copies enable fast recovery
* 00:13:30 – Application mobility hasn’t really happened
* 00:14:20 – Networking is still a major issue in application mobility
* 00:19:00 – HYCU for Azure, how does it work?
* 00:22:00 – What is Protege?
* 00:27:00 – What about futures – containers? AWS?
* 00:27:30 – Nutanix Mine and HYCU Test Drive
* 00:30:00 – Wrap Up

Related Podcasts & Blogs

* #73 – HYCU – Data Protection for Hyper-converged Infrastructure
* HYCU Announces GA of HYCU for Azure
* Data Protection Choices for Heterogeneous Environments

Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #vued.
