Screaming in the Cloud

Summary: Screaming in the Cloud with Corey Quinn features conversations with domain experts in the world of Cloud Computing. Topics discussed include AWS, GCP, Azure, Oracle Cloud, and the "why" behind how businesses are coming to think about the Cloud.

Podcasts:

 Episode 7: The Exact Opposite of a Job Creator | File Type: audio/mp3 | Duration: 00:35:14

Monitoring across the entire technical world is terrible and continues to be a giant, confusing mess. How do you monitor? Are you monitoring things the wrong way? Why not hire a monitoring consultant? Today, we're talking to monitoring consultant Mike Julian, who is the editor of the Monitoring Weekly newsletter and author of O'Reilly's Practical Monitoring. He is the voice of monitoring.

Some of the highlights of the show include:
- Observability comes from control theory; monitoring is for what we can anticipate
- The industry's lack of interest in and focus on monitoring
- When there's an outage, why doesn't monitoring catch it? Unforeseen things
- The cost and failure of running tools and systems that are obtuse to monitor
- Outsourcing monitoring instead of devoting time, energy, and personnel to it
- Outsourcing infrastructure means giving up some control; how you monitor and manage systems changes in the Cloud
- CloudWatch: where metrics go to die (a hedged metrics sketch follows these notes)
- Distributed tracing: tracing calls as they move through a system
- Serverless functions: difficulties experienced and techniques to use
- Warm vs. cold starts: if a container isn't up and running, it has to set up database connections
- Monitoring can't fix a bad architecture; it can't fix anything; improve the application architecture
- Visibility of outages and perceived pain; different services have different availability levels

Links: Mike Julian, Monitoring Weekly, Copy Construct on Twitter, Baron Schwartz on Twitter, Charity Majors on Twitter, Redis, Kubernetes, Nagios, Datadog, New Relic, Sumo Logic, Prometheus, Honeycomb, Honeycomb Blog, CloudWatch, Zipkin, X-Ray, Lambda, DynamoDB, Pinboard, Slack, Digital Ocean
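
The "CloudWatch: where metrics go to die" jab aside, pushing custom metrics is often the first monitoring step teams take on AWS. As a rough, hypothetical illustration (not something walked through on the show), a minimal boto3 sketch might look like this; the namespace, metric name, and dimension are invented placeholders:

```python
# Minimal sketch: publishing a custom application metric to CloudWatch with boto3.
# Assumes AWS credentials are already configured; names below are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="MyApp/Checkout",  # hypothetical namespace
    MetricData=[
        {
            "MetricName": "OrdersProcessed",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 42,
            "Unit": "Count",
        }
    ],
)
```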

 Episode 6: The Robot Uprising Will Have Very Clean Floors | File Type: audio/mp3 | Duration: 00:40:02

How many of you are considered heroes? Specifically, in the serverless, Cloud, Twitter, and Amazon Web Services (AWS) communities? Well, Ben Kehoe is a hero. Ben is a Cloud robotics research scientist who makes serverless Roombas at iRobot. He was named an AWS Community Hero for contributions that help expand the understanding, expertise, and engagement of people using AWS.

Some of the highlights of the show include:
- Ben's path to becoming a vacuum salesman
- The history of Roomba and how AWS helps deliver current features
- Roombas use AWS Internet of Things (IoT) for communication between the Cloud and the robot (a hedged publish sketch follows these notes)
- Boston is shaping up to be the birthplace of the robot overlords of the future
- AWS IoT is serverless and bundles a number of pieces into one service
- The robot uprising will have very clean floors
- AWS Greengrass, which deploys runtimes and manages connections for communication, should not be ignored
- Creating robots that make money and work well
- Roomba's autonomy to serve the customer and meet expectations
- Robots with Cloud and network connections
- Competitive Cloud providers were available, but AWS was the clear winner
- The serverless approach and its advantages for an intelligent vacuum cleaner
- Future use of higher-level machine learning tools
- The common concern of lock-in with AWS
- The changing landscape of data governance and multi-Cloud
- Preparing for migrations that don't happen or don't change the world
- Data gravity and saving vs. spending money

Links: Ben Kehoe on YouTube, AWS, AWS Community Hero, AWS IoT, Ben Kehoe on Twitter, iRobot, AWS Greengrass, Shark Cat, Medium, Boston Dynamics, AWS Lambda, AWS SageMaker, AWS Kinesis, Google Cloud Platform, Spanner, Kubernetes, Digital Ocean
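
The show notes above mention Roombas using AWS IoT for communication between the Cloud and the robot. As a rough, hypothetical sketch of what publishing a device message over AWS IoT looks like with boto3 (the topic name and payload fields are invented, and real devices typically speak MQTT directly rather than calling boto3):

```python
# Minimal sketch: publishing a device status message to an AWS IoT topic with boto3.
# Assumes AWS credentials are configured; topic and payload are invented examples.
import json

import boto3

iot_data = boto3.client("iot-data", region_name="us-east-1")

iot_data.publish(
    topic="robots/roomba-0042/status",  # hypothetical MQTT topic
    qos=1,
    payload=json.dumps({"battery": 87, "state": "docked"}),
)
```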

 Episode 5: The Last Mainframe with a Kickstart and a Double Clutch | File Type: audio/mp3 | Duration: 00:34:39

How are companies evolving in a world where Cloud is on the rise? Where Cloud providers are bought out and absorbed into other companies? Today, we're talking to Nell Shamrell-Harrington about Cloud infrastructure. She is a senior software engineer at Chef, CTO at Operation Code, and a core maintainer of the Habitat open source project. Nell has traveled the world to talk about Chef, Ruby, Rails, Rust, DevOps, and Regular Expressions.

Some of the highlights of the show include:
- Chef is a configuration management tool that handles instances, files, virtual machine containers, and other items
- Immutable infrastructure has emerged as the best-practice approach
- Chef is moving into its next generation through various projects, including a scanning tool called Compliance
- Some people don't trust virtualization
- Habitat is an open source project featuring software that lets you use a universal packaging format
- Habitat is a runtime, so when you run a package on multiple virtual machines, they form a supervisor ring and communicate via leader/follower roles
- How you deploy an application depends on several factors, including application and infrastructure needs
- It is possible to convert old systems with old deployment models to Habitat
- Habitat allows you to lift a legacy application and put it into modern infrastructure without needing to rewrite the application
- You can ease packages into Habitat, and then have Habitat manage pieces of the application
- Habitat is Cloud-agnostic and integrates with public and private Cloud providers by exporting an application as a container
- Chef is one of just a few third-party offerings marketed directly by AWS
- From inception to deployment, there is a place for large Cloud providers to parlay into language they already speak
- Operation Code is a non-profit that teaches software engineering skills to veterans and helps them transition into high-paying engineering jobs
- The technology landscape is ever changing. What skills are most marketable?
- Operation Code is a learn-by-experience type of organization and usually starts people on the front end so they can immediately see results

Links: Nell Shamrell-Harrington, Nell Shamrell-Harrington on Twitter, Nell Shamrell-Harrington on GitHub, Operation Code, Chef, Ruby on Rails, Rust, Regular Expressions, Habitat, AWS, Kubernetes, Docker, LinkedIn Learning, GorillaStack (use discount code: screaming)

 Episode 4: It's a Data Lake, not a Data Public Swimming Pool | File Type: audio/mp3 | Duration: 00:34:29

Open source activism tends to focus on running on hardware you can trust and avoiding Cloud computing. The problem with some Cloud providers has to do with a conflict of interest between serving customers and how they generate revenue. It's important for customers to have control of their computers and their data in the Cloud. But what about their security and privacy? Today, we're talking to Kyle Rankin, chief security officer at Purism and writer for Linux Journal. He is a Linux expert who decided to work at Purism because of the company's belief in free software and the Linux community.

Some of the highlights of the show include:
- Cloud providers have faced challenges when it comes to data privacy and who owns what
- The word "Cloud" is overloaded, and it is unclear who is in control
- Cloud providers can sabotage efforts to make programs work together
- Cloud providers may not trawl through data and exploit it themselves, yet they develop tools for customers to be able to do that
- Even though Linux Journal stopped being printed, went digital, and was going under, it's now back and taking a new approach. What matters to new readers and Linux users is now different from what was important to original readers
- The more time you spend understanding what's happening behind the scenes, the more marketable and adaptable you become
- Kyle explains whether Amazon Linux is becoming a viable concern and whether distribution matters anymore. Now, it's about running an application, not thinking about what it's running on
- Are there gangs of Cloud users? Do people look down on Azure users? The target is always moving and changing
- Check out Kyle's book, Linux Hardening in Hostile Networks: Server Security from TLS to Tor

Links: Kyle Rankin on Twitter, Purism, Kyle Rankin's book - Linux Hardening in Hostile Networks: Server Security from TLS to Tor, Linux Journal 2.0 FAQ, GorillaStack (use "screaming" for discount)

 Episode 3: Turning Off Someone Else's Site as a Service | File Type: audio/mp3 | Duration: 00:34:38

How do you encourage businesses to pick Google Cloud over Amazon and other providers? How do you advocate for selecting Google Cloud and being successful on that platform? Google Cloud is not just a toy with fun features; it is a capable Cloud service. Today, we're talking to Seth Vargo, a Senior Staff Developer Advocate at Google. Previously, he worked at HashiCorp in a similar advocacy role and worked very closely with Terraform, Vault, Consul, Nomad, and other tools. He left HashiCorp to join Google Cloud and talk about those tools and his experiences with Chef and Puppet, as well as the communities surrounding them. He wants to share how to use these tools to integrate with Google Cloud and help drive product direction.

Some of the highlights of the show include:
- Strengths of Google Cloud include its approach to billing. You can work on Cloud bills and terminate all billable resources. The button you click in the user interface to disable billing across an entire project and delete all billable resources has an API, so you can build a chat bot or script, too (a hedged API sketch follows these notes). The console presents anything you've done by pointing and clicking, along with what it looks like in code form. You can expose that to other people's accounts, because turning off someone else's Website as a service can be beneficial.
- You can invite anyone with a Google account, not just '@gmail.com' but '@' any domain, and give them admin or editor permissions across a project. They're effectively part of your organization within the scope of that project. For example, this feature is useful for training, or if a consultant needs to see all of their different clients in one dashboard while the clients can't see each other.
- Google is a household name. However, it's important to recognize that advocacy is not just external; there's an internal component to it. There are many parts of Google and many features of Google Cloud that people aren't aware of. As an advocate, Seth's job is to help people win.
- Besides showing people how they can be successful on Google Cloud, Seth focuses on strategic complaining. He is deeply ingrained in several DevOps and configuration management communities, which provide him with positive and negative feedback. It's his job to take that feedback and convert it into meaningful action items for product teams to prioritize and put on roadmaps. Then the voices of the communities are echoed in the features and products being developed internally.
- Amazon has been in the Cloud business for a long time. What took Google so long? For a long time, Google was perceived as being late to the party and unable to offer services as comprehensive and mature as Amazon's. Now, people view Google Cloud not as substandard, but as not where serious business happens. It's a fully featured platform, and it comes down to preferences and pre-existing features, not capability.
- Small and mid-size companies typically pick a Cloud provider and stick with their choice. Larger companies and enterprises, such as Fortune 50 and Fortune 500 companies, pick multiple Clouds. This is usually due to some type of legal or compliance issue, or because certain Cloud providers have specific features.
- Externally, Google offers the Deployment Manager tool at cloud.google.com. It's the equivalent of CloudFormation, and teams at Google are staffed full time to do engineering work on it. Every action you take by clicking a button on cloud.google.com has a corresponding API, and the API docs are accessible via Deployment Manager.
- Google Cloud also partners with open source tools and the corresponding companies. There are people at Google, paid by Google, who work full time on open source tools like Terraform, Chef, and Puppet. This allows you to provision Google Cloud resources using the tools that you prefer.
- According to Seth, there are five key pillars of DevOps: 1) reduce organizational silos and break down barriers between teams; 2) accept failure; 3) implement gradual change; 4) use tooling and automation; and 5) measure everything. Think of DevOps as an interface in a programming language like Java, or any typed language: it doesn't actually define what you do, but gives you a high-level description of what the function is supposed to implement. The SRE discipline is a prescribed way of implementing those five pillars of DevOps. Specific tools and technologies used within Google, some of which are exposed publicly as part of Google Cloud, enable that kind of DevOps culture and mindset.
- The reason to offer an abstract class in programming is that there's more than one way to solve a problem, and SRE is just one of those ways. It's the way that has worked best for Google, and it has worked best for a number of customers Google works with. But there are other ways, too. Google supports those and recognizes that there isn't just one path to operational success, but many ways to reach it.
- The book Site Reliability Engineering describes how Google does SRE, which Google has tried to evangelize to the world because it can help people improve operations. The flip side is that organizations need to be cognizant of their own requirements.
- Google has always been held up, along with several other companies, as a shining beacon of how infrastructure management could be. But some say there are still problems with its infrastructure, even after 20-some years and billions invested. Every company has problems, some technical, some cultural; Google is no exception. The one key difference is how Google handles issues from a cultural perspective: it focuses on fixing the problem and making sure it doesn't happen again. There's a very blameless culture.
- Conferences tend to include a lot of hand waving and storytelling. But as an industry, more war stories need to be told instead of pleasure stories. Conference organizers want sunshine and rainbows because that sells tickets and makes people happy. The systemic problem is how to talk about problems out in the open.
- Becoming frustrated and trying to figure out why computers do certain things is a key component of the SRE discipline, referred to as toil: work tied to systems that we either don't understand or that doesn't make sense to automate.
- Those going to Google Cloud to "move and improve" tend to be a mix of people coming from other Cloud providers and people coming from on-premises data center deployments. Move and improve is where there are VMs in a data center and they need to be moved to the Cloud.
- There are only tiny differences around the Cloud-native paradigm between providers. There are some key pillars: Does the application handle restarts well? Is it highly available? Can it be containerized, even though containers aren't strictly required for Cloud native? Does it package all of its dependencies with it? Can it run on different operating systems? All of these things are generic; they're not specific to a Cloud provider.

Links: Google Cloud and blog, Amazon Web Services, HashiCorp, Terraform, Vault, Consul, Nomad, Chef, Puppet, Kubernetes, AutoML, Monitorama, Azure, CloudFormation, Ansible, ELK Stack, Site Reliability Engineering book from O'Reilly, Fastly, Hacker News, Cloud Foundry, Microsoft Cloud, Alibaba Cloud, Lambda

Quotes by Seth:
"Everything we do on Google Cloud is API First. Anytime you click a button in that Web UI, there is a corresponding API call, which means you can build automation, compliance, and testing around these various aspects."
"The IAM and permission management in Google Cloud is incredibly powerful. It leverages the same IAM permissions that G Suite has, which is hosted Gmail, Calendar, and all of those other things."
"How do I get people who want to use Google Cloud or don't know about Google Cloud? The ability to be successful on the platform."
"I would definitely say that any company you work at, whether the recruiter tells you that it's all sunshine and rainbows and there's nothing ever wrong is a lie."
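
To make the "the disable-billing button has an API" point above concrete, here is a minimal, hedged sketch using the Cloud Billing API's projects.updateBillingInfo method through the Google API Python client. The project ID is a placeholder, this assumes credentials that are allowed to manage billing, and detaching the billing account is what stops new charges; it is a sketch of the idea, not a production kill switch.

```python
# Hedged sketch: detaching the billing account from a GCP project, the API behind
# the console's "disable billing" button. PROJECT_ID is a placeholder.
from googleapiclient import discovery

PROJECT_ID = "my-example-project"  # hypothetical project

billing = discovery.build("cloudbilling", "v1")

billing.projects().updateBillingInfo(
    name=f"projects/{PROJECT_ID}",
    body={"billingAccountName": ""},  # empty value detaches the billing account
).execute()
```

Wired into a chat bot or a budget-alert handler, the same call is how "turning off someone else's site as a service" becomes a script rather than a button.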

 Episode 2: Shoving a SAN into us-east-1 | File Type: audio/mp3 | Duration: 00:35:09

When companies migrate to the Cloud, they are literally changing how they do everything in their IT department. If lots of customers rely exclusively on a single region, like us-east-1, then they are directly impacted by its outages. There is safety in the herd and in numbers, because everybody sits there, down and out, together. But that's no excuse to engineer your application to be little more than a single point of failure. Relying on a sole backing service for something is a bad idea, and it's unacceptable from a business perspective. Today, we're talking to Chris Short from the Cloud and DevOps space. Recently, he was recognized for his DevOps'ish newsletter and won the Opensource.com People's Choice Award for his DevOps writing. He's been blogging for years about the things he does every day, such as tutorials, code, and methods. Now Chris, along with Jason Hibbets, runs the DevOps team for Opensource.com.

Some of the highlights of the show include:
- Chris' writing makes difficult topics understandable. He is frank and provides broad information, but admits when he is not sure about something.
- SJ Technologies aims to help companies embrace a DevOps philosophy while adapting their operations to a Cloud-native world. Companies want to take advantage of the philosophies and tooling around being Cloud native.
- Many companies consider a Cloud migration because they've got data centers across the globe in an active-passive backup setup, with two data centers that are treated differently and can't fail over to each other easily.
- Some companies do a Cloud migration to refactor and save money. A Cloud migration can result in you having to shove your SAN into us-east-1, and it can become a hybrid workflow.
- Lift and shift is often considered the first legitimate step toward moving to the Cloud. However, know as much as you can about your applications and their RAM and CPU allowances. Look at density when you're lifting and shifting. Know how your applications work, and work together.
- Simplify a migration by knowing what size and type of instances to use and what monitoring to have in place (a hedged rightsizing sketch follows these notes).
- Some people still do not support being on the Cloud, due to a lack of understanding of business practices and how they are applied. But most are no longer skeptical about moving to the Cloud. Now, instead of "why Cloud," it becomes "why not."
- Don't jump without looking. Planning phases are important, but there will be unknowns that you will have to face.
- Downtime does cost money. Customers will go to other sites; they can find what they want and need somewhere else. There's no longer a sole source of anything.
- The DevOps journey is never finished, and you're never done migrating. Embrace change yourself to help organizations change.

Links: Chris Short on Twitter, DevOps'ish, SJ Technologies, Amazon Web Services, Cloud Native Infrastructure, Oracle, OpenShift, Puppet, Kubernetes, Simon Wardley, Rackspace, The Mythical Man-Month, Atlassian, BuzzFeed

Quotes by Chris:
"Let's not say that they're going whole hog Cloud Native or whole hog cloud for that matter but they wanna utilize some things."
"They can never switch from one to the other very easily, but they want to be able to do that in the Cloud and you end up biting off a lot more than you can chew…"
"Create them in AWS. Go. They gladly slurp in all your VMware instances; you can create a mapping of this sized thing to that sized thing and off you go. But it's a good strategy to just get there."
"We have to get better as technologists in making changes and helping people embrace change."
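
The rightsizing advice above ("know what size and instances to use and what monitoring to have in place") is something you can start to quantify with a small script. A rough boto3 sketch, not from the episode, that pairs each EC2 instance with its average CPU utilization over an arbitrary two-week window:

```python
# Hedged sketch: inventory EC2 instances and their recent average CPU utilization
# to inform lift-and-shift rightsizing. Region and lookback window are arbitrary.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)  # arbitrary two-week lookback

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,  # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg_cpu = sum(p["Average"] for p in points) / len(points) if points else 0.0
        print(f"{instance_id} {instance['InstanceType']} avg CPU {avg_cpu:.1f}%")
```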

 Episode 1: Feature Flags with Heidi Waterhouse of LaunchDarkly | File Type: audio/mp3 | Duration: 00:28:41

This podcast features people doing interesting work in the world of Cloud. What is the state of the technical world? Let's first focus on the up-or-down, on-or-off function of feature flags. Today, we're talking to Heidi Waterhouse, a technical writer turned Developer Advocate at LaunchDarkly, a feature flag service. A feature flag is a way to wrap a snippet of code around your feature and make it into an instrument you can turn on or off. It lets you turn things on and off in your codebase quickly without having to do several commits. However, flags become difficult to track once there are more than about a dozen of them, so LaunchDarkly provides a way to manage your features at scale with a usable interface and API.

Some of the highlights of the show include:
- A feature flag allows you to hide items before you want them to go live on your Website. You hide the work behind a feature flag, doing all of it ahead of time. Then, at some point, you turn it all on instantly, without the risk of pushing untested code into production (a hedged sketch of a flag check follows these notes).
- You can test at scale to gain authentic data. Test something with your team, your company's employees, your customers, and so on. However, no matter how good your integration tests are, there are always wobbles to watch for in the system.
- With implementation, there are a few paths that can work, such as the massive reorganization path. Or you can just start incrementally, with feature flags for new features.
- LaunchDarkly thinks of the Cloud as its surface, because it mostly works with people who are doing Web-based delivery of features.
- Major companies, like Google and Facebook, run services similar to feature flags for their own development. They operate on such a giant scale that they have internal teams doing it.
- Companies use feature flags on the front end and for other purposes. They work through the whole stack, from front-end page delivery, pricing tiers, white labeling, and style sheets to safer deployments.
- Do not focus on documentation. You should not have to read documentation for anything that you don't own. Every feature should have documentation tied to its code. Create a customized experience.
- Feature flags effectively manage and minimize risk. There is always risk in the world, but what causes disaster is not just one failure; it is a multiplication of failures. This goes wrong and that goes wrong. Feature flagging breaks monolithic releases into tiny chunks that can go forward or backward.
- LaunchDarkly holds monthly meet-ups called Test in Production, where people share their use cases around continuous integration, continuous deployment, DevOps, and more.

Links: LaunchDarkly, iPad, Autodesk, Slack, IBM

Quotes by Heidi:
"What feature flags do is make it possible for you to push out a deployment with things hidden, we call it launching darkly."
"We're all about avoiding risk, I think this is our motto this year, eliminate risk…you can't eliminate risk, but you can make it much less risky."
"Go ahead and write your feature. You know that it's hidden behind the magical feature flying curtain until you're ready to turn it on."
"If 20 years of technical writing taught me anything, it's that nobody wants to be reading documentation."
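
To make the "wrap a snippet of code around your feature" idea concrete, here is a minimal hand-rolled sketch of the pattern. It is not the LaunchDarkly SDK; the flag name and checkout functions are invented, and a hosted service would replace the in-memory dict with an API-backed store plus per-user targeting rules.

```python
# Hand-rolled illustration of the feature-flag pattern (not the LaunchDarkly SDK):
# the new code path ships dark behind a flag and is switched on without a deploy.

FLAGS = {"new-checkout-flow": False}  # hypothetical flag, off by default


def is_enabled(flag_name: str, default: bool = False) -> bool:
    """Return the current state of a flag, falling back to a safe default."""
    return FLAGS.get(flag_name, default)


def legacy_checkout(cart: list) -> str:
    return f"legacy checkout of {len(cart)} items"


def new_checkout(cart: list) -> str:
    return f"new checkout of {len(cart)} items"


def checkout(cart: list) -> str:
    # The feature is wrapped in a flag check, so it can go forward or backward
    # instantly, independent of the release that shipped the code.
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)


if __name__ == "__main__":
    print(checkout(["book", "mug"]))   # legacy path while the flag is off
    FLAGS["new-checkout-flow"] = True  # flip the flag, no redeploy needed
    print(checkout(["book", "mug"]))   # new path
```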
