
The InfoQ Podcast

Summary: Software engineers, architects, and team leads have found inspiration to drive change and innovation in their teams by listening to the weekly InfoQ Podcast. They have received essential information that has helped them validate their software development roadmaps. We achieve this by interviewing some of the top CTOs, engineers, and technology directors from companies like Uber, Netflix, and more. Over 1,200,000 downloads in the last 3 years.


Podcasts:

 Michelle Krejci on Moving to Microservices: Visualising Technical Debt, Kubernetes, and GraphQL | File Type: audio/mpeg | Duration: 00:34:05

In this podcast, Daniel Bryant spoke to Michelle Krejci, service engineer lead at Pantheon, about the Drupal and WordPress webops company's move to a microservices architecture. Michelle is a well-known conference speaker in the space of technical leadership and continuous integration, and she shared her lessons learned over the past four years of the migration.

Why listen to this podcast:

- The backend for the Pantheon webops platform began as a Python-based monolith with a Cassandra data store. This architecture choice initially enabled rapid feature development as the company searched for product/market fit. However, as the company found success and began scaling its engineering teams, rapidly adding new functionality to the monolith became challenging.
- Conceptual debt and technical debt greatly impact the ability to add new features to an application. Moving to microservices does not eliminate either form of debt, but this architectural pattern can make the debt easier to identify and manage, for example by creating well-defined APIs and boundaries between modules.
- Technical debt, and the associated engineering toil, is real debt, with a dollar value, and should be tracked and made visible to everyone.
- Establishing "quick wins" during the early stages of the migration towards microservices was essential. Building new business-focused services using asynchronous "fire and forget" event-driven integrations with the monolith helped greatly with this goal.
- Using containers and Kubernetes provided the foundations for rapidly deploying, releasing, and rolling back new versions of a service. Running multiple Kubernetes namespaces also allowed engineers to clone the production namespace and environment (without data) and perform development and testing within an individually owned sandboxed namespace.
- Using the Apollo GraphQL platform allowed schema-first development: frontend and backend teams collaborated on creating a GraphQL schema, and then individually built their respective services using this as a contract. GraphQL also allowed easy mocking during development, and creating backward-compatible schemas allowed the deployment and release of functionality to be decoupled. (A schema-first sketch follows below.)
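The episode describes schema-first development on the Apollo GraphQL platform. As a rough sketch of the same idea in Python (using the Ariadne library rather than Apollo, with a hypothetical Site type and stub data), the schema is the shared contract and each team builds against it independently:

```python
# pip install ariadne  (a schema-first GraphQL library for Python;
# Apollo follows the same pattern in the JavaScript ecosystem)
from ariadne import QueryType, make_executable_schema, graphql_sync

# Hypothetical schema: the contract agreed between frontend and backend.
type_defs = """
    type Site {
        id: ID!
        name: String!
    }

    type Query {
        sites: [Site!]!
    }
"""

query = QueryType()

@query.field("sites")
def resolve_sites(_, info):
    # Stub resolver; a real backend would query its own data store.
    # The frontend can develop against exactly this mocked shape.
    return [{"id": "1", "name": "example-site"}]

schema = make_executable_schema(type_defs, query)

success, result = graphql_sync(schema, {"query": "{ sites { id name } }"})
print(result)
```

Because both sides code to the schema, a backward-compatible schema change lets the backend deploy new fields before the frontend releases anything that uses them.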

 Ryan Kitchens on Learning from Incidents at Netflix, the Role of SRE, and Sociotechnical Systems | File Type: audio/mpeg | Duration: 00:28:54

In today's podcast we sit down with Ryan Kitchens, a senior site reliability engineer and member of the CORE team at Netflix. This team is responsible for the entire lifecycle of incident management at Netflix, from incident response to memorialising an issue.

Why listen to this podcast:

- Top-level metrics can be used as a proxy for user experience, and can be used to determine that an issue should be alerted on and investigated. For example, at Netflix, if the customer playback initiation "streams per second" metric declines rapidly, this may be an indication that something has broken. (A small sketch of this kind of check follows below.)
- Focusing on how things go right can provide valuable insight into the resilience within your system, e.g. what are people doing every day that helps us overcome incidents? Finding sources of resilience is somewhat "the story of the incident you didn't have".
- When conducting an incident postmortem, simply reconstructing an incident is often not sufficient to determine what needs to be fixed; there is no single root cause in complex sociotechnical systems such as those found at Netflix and most modern web-based organisations. Instead, teams must dig a little deeper, and look for what went well, what contributed to the problem, and where the recurring patterns are.
- Resilience engineering is a multidisciplinary field that was established in the early 2000s, and the associated community that has emerged is both academic and deeply practical. Although much resilience engineering focuses on domains such as aviation, surgery, and the military, there is much overlap with the domain of software engineering.
- Make sure that support staff within an organisation have a feedback loop into the product team, as the people providing support often know where all of the hidden problems are, the nuances of the systems, and the workarounds.
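As a purely illustrative sketch (not Netflix code; the window size, threshold, and values are invented), alerting on a rapid decline in a business metric such as "streams per second" can be as simple as comparing each sample against a rolling baseline:

```python
# Hypothetical sketch: alert when a key business metric (e.g. "streams
# per second") drops sharply against its recent baseline.
from collections import deque

WINDOW = 60           # samples kept for the baseline (e.g. one per second)
DROP_THRESHOLD = 0.3  # alert if the metric falls 30% below the baseline

history = deque(maxlen=WINDOW)

def check_sps(current_sps: float) -> bool:
    """Return True if the current value is anomalously low."""
    baseline = sum(history) / len(history) if history else current_sps
    history.append(current_sps)
    return baseline > 0 and current_sps < baseline * (1 - DROP_THRESHOLD)

# Example: a sudden fall from ~1000 to 600 streams/second trips the alert.
for value in [1000, 1005, 995, 1002, 600]:
    if check_sps(value):
        print(f"ALERT: streams per second dropped to {value}")
```

The point of such a top-level proxy metric is that it alerts on user impact directly, rather than on any one of the hundreds of services that could have caused it.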

 Oliver Gould on the Three Pillars of Service Mesh, SMI, and Making Technology Bets | File Type: audio/mpeg | Duration: 00:25:08

In this podcast we sit down with Oliver Gould, co-founder and CTO of Buoyant. Oliver has a strong background in networking, architecture, and observability, and worked on solving associated technical challenges at both Yahoo! and Twitter. Oliver is a regular presenter at cloud and infrastructure conferences, and alongside his co-founder William Morgan you can often find him in the hallway track, waxing lyrical about service mesh, a term they practically coined, and trying to bring others along on the journey.

Service mesh technology is still young, and the ecosystem is very much a work in progress, but there have been several interesting recent developments within this space. One of these was the announcement of the Service Mesh Interface (SMI) at the recent KubeCon EU in Barcelona. The SMI spec seeks to unblock service mesh integrators and implementers, as it provides an abstraction that removes the need to bet on any single service mesh implementation. This can be good for both tool makers and enterprise early adopters. Organisations like Microsoft and HashiCorp, along with Buoyant, are working alongside the community to help define the SMI.

In this podcast we summarise the evolution of the service mesh concept, with a focus on the three pillars: visibility, security, and reliability. We explore the new traffic "tap" feature within Linkerd that allows near real-time in-situ querying of metrics, and discuss how to implement network security by leveraging primitives like the Service Account provided by Kubernetes. We also discuss how reliability features such as retries, timeouts, and circuit breakers are becoming table stakes for infrastructure platforms. We cover the evolution of the Service Mesh Interface, explore how service meshes may impact development and platforms in the future, and briefly discuss some of the benefits offered by the Rust language in relation to building a data plane for Linkerd. We conclude the podcast with a discussion of the importance of community building.

Why listen to this podcast:

- A well-implemented service mesh can make a distributed software system more observable. Linkerd 2.0 supports both emitting mesh telemetry for offline analysis and the ability to "tap" communications and make queries dynamically against the data. The Linkerd UI currently makes use of the tap functionality.
- Linkerd aims to make the implementation of secure service-to-service communication easy, and it does this by leveraging existing Kubernetes primitives. For example, Service Accounts are used to bootstrap the notion of identity, which in turn is used as a basis for Linkerd's mTLS implementation.
- Offering reliability is "table stakes" for any service mesh. A service mesh should make it easy for platform owners to offer fundamental service-to-service communication reliability to application owners. (A sketch of these primitives follows below.)
- The future of software development platforms may move (back) to more PaaS-like offerings. Kubernetes-based function-as-a-service (FaaS) frameworks like OpenFaaS and Knative are providing interesting features in this space. A service mesh may provide some of the glue for this type of platform.
- Working on the Service Mesh Interface (SMI) specification allowed the Buoyant team to sit down with other community members like HashiCorp and Microsoft, share ideas, and identify commonality between existing service mesh implementations.
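As a loose illustration of the reliability primitives the episode calls table stakes (this is not Linkerd code; a real proxy enforces the timeout in-flight rather than checking it after the fact), retries with backoff and a timeout budget look roughly like this in plain Python:

```python
# Illustrative sketch of mesh-style reliability primitives: retries with
# exponential backoff, treating over-budget calls as failures.
import time

def call_with_retries(fn, retries=3, timeout_s=1.0, backoff_s=0.1):
    """Call fn(); retry on failure, treating slow calls as failures."""
    for attempt in range(retries):
        start = time.monotonic()
        try:
            result = fn()
            if time.monotonic() - start > timeout_s:
                # Simplification: a real proxy would cancel the call.
                raise TimeoutError("call exceeded timeout budget")
            return result
        except Exception:
            if attempt == retries - 1:
                raise  # budget exhausted; surface the failure
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
```

The value of a mesh is that the platform applies this policy uniformly on behalf of every service, so application owners do not reimplement it per codebase.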

 Event Sourcing: Bernd Rücker on Architecting for Scale | File Type: audio/mpeg | Duration: 00:25:07

Today on the podcast, Bernd Rücker of Camunda talks about event sourcing. In particular, Wes and Bernd discuss scalability, events, commands, consensus, and the orchestration engines Camunda has implemented. This podcast is a primer on the considerations involved in choosing between an RDBMS and an event-driven system.

Why listen to this podcast:

- An event-driven system is a more modern approach to building highly scalable systems.
- An RDBMS can limit throughput and scalability. Camunda was able to achieve higher levels of scale by implementing an event-driven system.
- Commands and events are often confused. Commands are actions that request something to happen; events describe something that has happened. Conflating the two causes confusion in the development of event-driven applications. (A minimal sketch of the distinction follows below.)
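As a minimal sketch of the command/event distinction (the domain names here are invented, not Camunda's API): a command expresses intent and may be rejected, while an event records an immutable fact in an append-only log:

```python
# A command asks for something to happen and may be rejected; an event
# records a fact that has already happened and is immutable.
from dataclasses import dataclass
import datetime

@dataclass
class ReserveItemCommand:      # intent: "please reserve this item"
    order_id: str
    item_id: str

@dataclass(frozen=True)
class ItemReservedEvent:       # fact: "this item was reserved"
    order_id: str
    item_id: str
    occurred_at: datetime.datetime

event_log: list[ItemReservedEvent] = []  # append-only event store

def handle(cmd: ReserveItemCommand) -> None:
    # Validation could reject the command here; once an event is
    # appended, it is never rejected or mutated, only appended to.
    event_log.append(ItemReservedEvent(cmd.order_id, cmd.item_id,
                                       datetime.datetime.utcnow()))

handle(ReserveItemCommand(order_id="o-1", item_id="i-42"))
print(event_log)
```

Replaying `event_log` from the start reconstructs current state, which is the core move that lets event-sourced systems scale past a single relational write path.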

 Pat Kua on Technical Leadership, Cultivating Culture, and Career Growth | File Type: audio/mpeg | Duration: 00:26:34

In this podcast we discuss a holistic approach to technical leadership, and Pat provides guidance on everything from defining target operating models and cultivating culture to supporting people in developing the careers they would like. There are a bunch of great stories, several book recommendations, and additional resources to follow up on.

- Cultivating organisational culture is much like gardening: you can't force things, but you can set the right conditions for growth. The most effective strategy is to communicate the vision and goals, lead the people, and manage the systems and organisational structure.
- N26, a challenger bank based in Berlin, has experienced hypergrowth over the past two years: both the number of customers and the number of employees have more than tripled. This provides lots of opportunities for ownership of products and projects, and it creates unique leadership challenges.
- A target operating model (TOM) is a blueprint of a firm's business vision that aligns operating capacities with strategic objectives, and provides an overview of the core business capabilities, internal factors, external drivers, and strategic and operational levers. This should be shared widely within an organisation.
- Pat has curated a "trident" operating model for employee growth. In addition to the classic individual contributor (IC) and management tracks, he believes that a third "technical leadership" track provides many benefits.
- People can switch between these tracks as their personal goals change. However, this switch can be challenging, and an organisation must support any transition with effective training.
- Pat recommends the following books for engineers looking to make the transition to leadership: The Manager's Path, by Camille Fournier; Resilient Management, by Lara Hogan; An Elegant Puzzle, by Will Larson; and Leading Snowflakes, by Oren Ellenbogen. Pat has also written his own book, Talking with Tech Leads.
- It is valuable to define organisational values upfront. However, these can differ from the actual culture, which is more about what behaviours you allow, encourage, and stop.
- Much like the values provided by Netflix's Freedom and Responsibility model, Pat argues that balancing autonomy and alignment within an organisation is vital for success. Managers can help their teams by clearly defining boundaries for autonomy and responsibility.
- Developing the skills to influence people is very valuable for leaders. Influence is based on trust, and this must be constantly cultivated. Trust is much like a bank account: if you do not regularly deposit actions that build trust, you may find yourself overdrawn when you need to make a withdrawal, which can lead to bad will and defensive strategies being employed.

 Thomas Graf on Cilium, the 1.6 Release, eBPF Security, & the Road Ahead | File Type: audio/mpeg | Duration: 00:27:05

Cilium is open-source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. It is a CNI plugin that offers layer 7 features typically seen in a service mesh. On this week's podcast, Thomas Graf (one of the maintainers of Cilium and co-founder of Isovalent) discusses the recent 1.6 release, some of the security questions and concerns around eBPF, and the future roadmap for the project.

Why listen to this podcast:

- Cilium brings eBPF to the cloud native world. It works at both layer 4 and layer 7. While it started as a pure eBPF plugin, the team discovered that caring only about ports was not enough from a security perspective. (A minimal eBPF example follows below.)
- Cilium went 1.0 about a year and a half ago. 1.6 is the most feature-packed release of Cilium yet. Today, the project has around 100 contributors.
- While Cilium can make it much easier to manage iptables, it also overlaps with a service mesh in that it can do things like understand application protocols and HTTP routes, or even restrict access to specific tables in data stores.
- Cilium provides both in-kernel and sidecar deployments. For sidecar deployments, it can work with Envoy to switch between kernel-space and user-space code. The focus is on flexibility, performance, and low overhead.
- BPF (Berkeley Packet Filter) was initially designed for filtering on data links. eBPF has the same roots, but it is now used for system call filtering, tracing, sandboxing, and more. It has grown to be a general-purpose programming language for extending the Linux kernel.
- Cilium has a multi-cluster feature built in. The 1.6 release can run in a kube-proxy-free configuration, allowing fine-grained network policies to run across multiple clusters without the use of iptables.
- Cilium offers on-the-wire encryption using in-kernel encryption technology, enabling mTLS across all traffic in your service fleet. The encryption is completely transparent to the application.
- eBPF has been used in all production environments at Facebook since May 2017, and it is used at places like Netflix, Google, and Reddit. Many companies have an interest in eBPF being secure and production-ready, so there is a lot of attention focused on finding and resolving any security issues that arise.
- 1.6 also delivered KVstore-free operation, socket-based load balancing, CNI chaining, native AWS ENI mode, enhancements to transparent encryption, and more.
- The plan for 1.7 is to keep moving up the stack to the socket level (to offer things like load balancing and transparent encryption at scale), and likely to offer deeper security features such as process-aware security policies for internal pod traffic.
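This is not Cilium code, but the canonical "hello world" of the BCC Python toolchain shows the mechanism Cilium builds on: userspace loads a small, verified program into the running kernel and attaches it to an event, here a kprobe on the clone() syscall:

```python
# pip install bcc  (requires Linux with kernel headers and root privileges)
from bcc import BPF

# A tiny eBPF program, compiled and loaded into the kernel at runtime.
prog = r"""
int hello(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
print("Tracing clone() syscalls... Ctrl-C to stop")
b.trace_print()  # stream the kernel trace output to stdout
```

Cilium applies the same idea at much larger scale, attaching eBPF programs to network hooks instead of kprobes to implement filtering, load balancing, and policy without iptables.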

 Yuri Shkuro on Tracing Distributed Systems Using Jaeger | File Type: audio/mpeg | Duration: 00:32:14

The three pillars of observability are logs, metrics, and tracing. Most teams are able to handle logs and metrics, while proper tracing can still be a challenge. On this podcast, we talk with Yuri Shkuro, the creator of Jaeger, author of the book Mastering Distributed Tracing, and a software engineer at Uber, about how the Jaeger tracing backend implements the OpenTracing API to handle distributed tracing. (A small client sketch follows below.)

Why listen to this podcast:

- Jaeger is an open-source tracing backend developed at Uber. It also has a collection of libraries that implement the OpenTracing API.
- At a high level, Jaeger is very similar to Zipkin, but Jaeger has features not available in Zipkin, including adaptive sampling and advanced visualization tools in the UI.
- Tracing is less expensive than logging because the data is sampled. It also gives you a complete view of the system: you can see a macro view of the transaction and how it interacted with dozens of microservices, while still being able to drill down into the details of one service.
- If you have only a handful of services, you can probably get away with logging and metrics, but once the complexity increases to dozens, hundreds, or thousands of microservices, you must have tracing.
- Tracing does not work with a black-box approach to the application. You can't simply use a service mesh and then add a tracing framework; you need correlation between a single request and all the subsequent requests that it generates, and a service mesh still relies on the underlying components handling that correlation.
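As a sketch of what instrumenting a service looks like with the Python OpenTracing client for Jaeger (the service and operation names here are invented; the sampler config simply samples every trace for demonstration):

```python
# pip install jaeger-client
import time
from jaeger_client import Config

config = Config(
    config={
        "sampler": {"type": "const", "param": 1},  # sample every trace
        "logging": True,
    },
    service_name="checkout-service",
)
tracer = config.initialize_tracer()

# Parent/child spans capture the causal structure of one request.
with tracer.start_span("place-order") as parent:
    with tracer.start_span("charge-card", child_of=parent) as child:
        child.set_tag("payment.provider", "example")
        time.sleep(0.05)  # simulated work

time.sleep(2)    # give the background reporter time to flush spans
tracer.close()
```

The `child_of` reference is the correlation the episode stresses: unless the application propagates that context across service calls, no backend (or mesh) can stitch the request back together.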

 Louise Poubel on the Robot Operating System | File Type: audio/mpeg | Duration: 00:28:41

ROS is the Robot Operating System. It has been used by thousands of developers to prototype and create robotic applications. ROS can be found on robots in warehouses, at self-driving car companies, and on the International Space Station. Louise Poubel is an engineer working with Open Robotics. Today on the podcast, she talks about what it takes to develop software that moves in physical space, including the Sense, Think, Act cycle, the developer experience, and the architecture of ROS.

Why listen to this podcast:

- When writing code for robots, you use the Sense, Think, Act cycle.
- ROS is an SDK for robotics. It provides a communication layer that enables data to flow between nodes that handle sensing, logic, and actuation. (A minimal node sketch follows below.)
- ROS has been around for twelve years and has two major versions. ROS 1 client libraries were implemented separately per language; ROS 2 offers a common C layer with implementations in many different languages, including Java, JavaScript, and Rust.
- Releases are on a six-month cadence; Dashing (May 2019) was the latest release at the time of recording. Previous releases were supported for one year; Dashing is the first LTS and will be supported for two years.
- ROS 2 builds on top of the Data Distribution Service (DDS) standard found in mission-critical systems such as nuclear power plants and aircraft.
- Simulation is an important step in robotics: it allows you to prototype a system before deploying to a physical robot.
- Rviz is a three-dimensional visualizer used to visualize robots, the environments they work in, and sensor data. It is a highly configurable tool, with many different types of visualizations and plugins, and it lets you bring all of your data together in one place and see it.
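A minimal ROS 2 node using rclpy, the Python client library, gives a feel for the communication layer described above (the node and topic names are made up; this assumes a sourced ROS 2 environment such as Dashing):

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__("talker")
        # Publish String messages on the 'chatter' topic (queue depth 10).
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(0.5, self.tick)  # fire every 0.5 s

    def tick(self):
        msg = String()
        msg.data = "hello from rclpy"
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Talker()
    rclpy.spin(node)  # hand control to the ROS 2 executor
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```

A sensor driver, a planner, and a motor controller are each just nodes like this one, publishing and subscribing on topics, which is how ROS maps onto the Sense, Think, Act cycle.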

 Matt Klein on Envoy Mobile, Platform Complexity, and a Universal Data Plane API for Proxies | File Type: audio/mpeg | Duration: 00:41:24

In this podcast we sit down with Matt Klein, software plumber at Lyft and creator of Envoy, and discuss topics including the continued evolution of the popular proxy, the strength of the open-source Envoy community, and the value of creating and implementing standards throughout the technology stack. We also explore the larger topic of cloud native platforms, and discuss the tradeoffs between using a simple, opinionated platform and something that is bespoke and more configurable, but also more complex. Related to this, Matt shares his thoughts on when and how to make the decision within an organisation to embrace technology like container orchestration and service meshes.

Finally, we explore the creation of the new Envoy Mobile project. The goal of this project is to expand the capabilities provided by Envoy all the way out to mobile devices powered by Android and iOS. For example, most current user-focused traffic shifting conducted at the edge is implemented with coarse-grained approaches via BGP and DNS, and using something like Envoy within mobile app networking stacks should allow finer-grained control.

Why listen to this podcast:

- The Envoy proxy community has grown from strength to strength over the last year, from the inaugural EnvoyCon that ran alongside KubeCon NA 2018 to the increasing number of code contributions from engineers working across the industry.
- Attempting to create a community-driven "universal data plane" for proxies with clearly defined APIs, like Envoy's xDS API, has allowed vendors to collaborate on a shared abstraction while still allowing room for "differentiated success" to be built on top of this standard.
- Google's gRPC framework is adopting the Envoy xDS APIs, as this will allow both Envoy and gRPC instances to be operated via a single control plane, for example Google Cloud Platform's Traffic Director service.
- There is a tendency within the software development industry to fetishise architectures that are designed and implemented by the unicorn tech companies, but not every organisation operates at this scale. However, there has also been industry pushback against the complexity that modern platform components like container orchestration and service meshes can introduce to a technology stack.
- Using a platform built on these components provides the best return on investment when an organisation's software architecture and development teams have reached a certain size.
- Function-as-a-service (FaaS)-type platforms will most likely be how engineers interact with software in the future. Business-focused developers often do not want to interact with the platform plumbing.
- Envoy Mobile builds on prior art, and aims to expand the capabilities provided by Envoy all the way out to mobile devices running Android and iOS. Most current end-user traffic shifting is implemented with coarse-grained approaches via BGP and DNS, and using something like Envoy instead will allow finer-grained control.
- Using Envoy Mobile in combination with Protocol Buffers 3, which supports annotations on APIs, can facilitate working with APIs offline, configuring caching, and handling poor network conditions. One of the motivations for this work is that small reductions in application response times can lead to better business outcomes.

 Armon Dadgar on HashiCorp Research, the Evolution of Infrastructure Tooling, and Standardisation | File Type: audio/mpeg | Duration: 00:22:35

On this podcast, we're talking to Armon Dadgar, co-founder and CTO of HashiCorp. Alongside Mitchell Hashimoto, Armon founded HashiCorp over six years ago, and the company has gone from strength to strength, with its open-source infrastructure product suite now consisting of Consul, Nomad, Vault, and Terraform. We discuss the formation of the HashiCorp research division, and explore some of the computer science research underpinning Consul and Nomad. We also cover the challenges of supporting teams as they look to embrace new modes of working with dynamic infrastructure, and Armon introduces the new learn.hashicorp.com educational website and accompanying community and support forums.

Why listen to this podcast:

- There is a lot of fundamental computer science research underpinning the HashiCorp infrastructure workflow and configuration tooling. This helps to ensure that these mission-critical tools perform as expected, and enables sound reasoning about scaling these technologies.
- The HashiCorp founders recognised the value of creating an industrial research-focused department within the company even when there were only 30 staff.
- The Consul service mesh and distributed key-value store leverages consensus and gossip algorithms from computer science research: Raft and SWIM, respectively. The HashiCorp team contributed a novel research-based improvement to SWIM, "Lifeguard: SWIM-ing with Situational Awareness", which was presented at the DSN academic conference. (A toy gossip sketch follows below.)
- Initially, HashiCorp produced a new tool every 6 to 12 months, focusing on filling gaps within the infrastructure workflow tooling market. Now the focus is on refining the operator and user experience of the existing tools, creating more integrations with other platforms and tooling, and helping engineering teams adopt these tools via educational resources and community forums.
- Standardisation within computing technology can offer many benefits, especially where interoperability is required or technology switching costs are high. Care must be taken to ensure the correct interfaces are created, and that the time is right to create appropriate abstractions.
- The HashiCorp team is focusing on "marching up the stack", with the goal that a lot of the underlying "plumbing" should be hidden from, or easily configurable by, application developers. This will allow developers to focus on adding value related to their business or organisation, rather than getting stuck managing infrastructure.
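As a loose illustration of the gossip idea behind SWIM (this is not HashiCorp's implementation; the node names and rumour payload are invented), each round every node relays what it knows to a few randomly chosen peers, so information spreads epidemically:

```python
import random

def gossip_round(state: dict[str, set[str]], fanout: int = 2) -> None:
    """state maps node -> set of rumours that node has heard."""
    for node, known in list(state.items()):
        peers = random.sample([n for n in state if n != node],
                              k=min(fanout, len(state) - 1))
        for peer in peers:
            state[peer] |= known  # peer learns everything node knows

nodes = {f"n{i}": set() for i in range(8)}
nodes["n0"].add("node n7 is suspected down")  # one node observes a failure
rounds = 0
while not all(nodes.values()):
    gossip_round(nodes)
    rounds += 1
print(f"rumour reached all {len(nodes)} nodes in {rounds} rounds")
```

The appeal for tools like Consul is that each node does a constant amount of work per round, yet rumours reach the whole cluster in roughly logarithmic time, which is what makes gossip-based membership scale.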

 Kingsley Davies and Cat Swetel at QCon London about Ethics and Requisite Variety | File Type: audio/mpeg | Duration: 00:31:40

In this episode, recorded at QCon London 2019, Shane Hastie, Lead Editor for Culture & Methods, first spoke to Kingsley Davies about ethics, and then with Cat Swetel about requisite variety and being mindful of the impact our decisions have on the future.

Why listen to this podcast:

- The need to explore the application of technology for good.
- The need for ethical standards in the technology industry.
- Data is the new oil, and it is frequently used in ways that are not in the best interest of society.
- Other engineering professions have codes of conduct and ethical frameworks that are mandated as part of the education process; software engineering currently has very little.
- Ashby's law of requisite variety (the more options that are available to a system, the more resilient the system is) applies to all aspects of our sociotechnical systems.
- We exist in the realm of ethics: we can't just go to work and do what we're told. Everything we do is a choice, and our choices have a huge impact on the future.

 Thomas Wuerthinger on GraalVM and Optimizing Java With Ahead-of-Time Compilation | File Type: audio/mpeg | Duration: 00:25:42

The promise of Java has always been "write once, run anywhere". This was enabled through just-in-time compilation, which defers targeting a specific platform until run time. But this flexibility has given rise to comments like "Java is slow." What if you could compile Java to native code? On this podcast, we're talking to Thomas Wuerthinger, a senior research director at Oracle Labs, where he leads the programming language implementation teams for Java, JavaScript, Ruby, and R. He is the architect of the Graal compiler and the Truffle self-optimising runtime.

Why listen to this podcast:

- The GraalVM project was initially just a replacement for the JVM's C2 just-in-time compiler, but has evolved to include support for multiple languages, as well as an ahead-of-time compilation mode.
- Support for multiple languages can provide better performance for some languages, as well as allowing direct calls between languages without inter-process communication.
- With GraalVM's AOT compilation, you can statically link system libraries, which allows you to run a static binary in a bare-metal Docker image, without even a Linux distribution.
- The major benefits of AOT are minimised startup time, memory footprint, and packaging size. This can come with a trade-off of reduced maximum throughput and higher latency.
- The GraalVM roadmap includes supporting additional platforms, such as Windows and mobile, as well as performance improvements for both the JIT and AOT compilers.

 Johnny Xmas on Web Security & the Anatomy of a Hack | File Type: audio/mpeg | Duration: 00:31:55

On this podcast, Wes talks to Johnny Xmas. Johnny works for Kasada, a company that offers a security platform to help ensure that only your users are logging into your web applications. Johnny is a well-known figure in the security space. The two discuss common attack vectors, the OWASP Top 10, and then walk through what hackers commonly do when attempting to compromise a system. The show is full of advice on protecting your systems, including topics such as defense in depth, time-based security, two-factor authentication, logging and alerting, security layers, and much more.

Why listen to this podcast:

- While there are sophisticated web attacks out there that use things like PhantomJS or headless Chrome, the vast majority of web application attacks are the same unsophisticated scripted attacks you always hear about. These are simple scripts using tools like curl and Burp Suite with Python or JavaScript, and they are still incredibly effective.
- The OWASP Top 10 really hasn't changed all that much in the last ten years. For example, despite being the number one example used to educate defensive engineers on how to protect their apps, SQL injection (SQLi) is still the most common attack. We continue to repeat the same mistakes that have exposed systems for a decade now.
- Phishing is by far the quickest way to compromise a system. Defense in depth, security boundaries, and limiting local admin rights are all things that corporations can implement to minimise the blast radius.
- Attackers have hundreds of gigabytes of real username/password combinations exposed by the breaches of the past few years, and these are often a first step when attempting to compromise a system. More often, attackers will figure out a valid email pattern for a company and then feed real names into that pattern to generate usernames. From there, brute-force attacks with those usernames against libraries of passwords are a common approach.
- A common approach is to go after an email login. While the email itself can be a treasure trove of information, it's more about using those credentials in other places. It's pretty common, for example, to use those credentials to get into a network via a VPN.
- Captcha/reCAPTCHA is not very effective at preventing these brute-force attacks; there are a large number of bypasses, and even Mechanical Turk-style services available to defeat these tools. What can be effective is time-based security, because it slows the attackers down. If you can slow them down, you can make the attack take so long to succeed that they'll go somewhere else.
- Once inside the network, attackers often find little security on internal systems. Multi-factor authentication, not just on the front door but on internal systems, is a huge step in the right direction. Monitoring not only for failed login attempts but, in some situations, for valid login attempts (such as when a domain admin logs into a domain controller) should absolutely be used.
- When it comes to application security between services within a network, the best advice is to make sure developers really understand what something like JWT (JSON Web Tokens) is trying to accomplish. It is often the lack of understanding of what they're actually doing that leads to system vulnerabilities. (A short JWT sketch follows below.)
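To make the JWT point concrete, here is a minimal sketch using the PyJWT library (the key, claims, and names are hypothetical): a JWT is a signed claim set, not an encrypted secret, which is exactly the misunderstanding that tends to cause vulnerabilities.

```python
# pip install pyjwt
import datetime
import jwt

SECRET = "server-side-signing-key"  # hypothetical; never ship to a client

token = jwt.encode(
    {
        "sub": "user-42",
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15),
    },
    SECRET,
    algorithm="HS256",
)

# Decoding verifies the signature and the expiry claim. Note that anyone
# can *read* the payload (it is only base64-encoded), so never put
# secrets in a JWT; the signature only proves who issued it.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])
```

Pinning `algorithms=["HS256"]` on decode matters: accepting whatever algorithm the token declares is a classic real-world JWT vulnerability.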

 Mike Milinkovich, Director of the Eclipse Foundation, Discusses the Journey to Jakarta EE 8 | File Type: audio/mpeg | Duration: 00:26:55

Today on the podcast, Wes talks with Mike Milinkovich, Executive Director of the Eclipse Foundation. The Eclipse Foundation was chosen to govern the evolution of Oracle's Java EE into Jakarta EE. The two discuss the project, the recent news about issues with the javax namespace, the challenges around bundling a Java runtime with Eclipse, and the path forward for Jakarta EE 9 and beyond.

Why listen to this podcast:

- Java EE, unlike Java SE, has always been a multi-vendor ecosystem. It made sense for everyone for Oracle to invite its partners to be involved in the governance of the Java EE specification for it to continue moving forward. This is the reason for moving Java EE into the Eclipse Foundation as Jakarta EE.
- The current plan is for the Eclipse Foundation to get a copyright license to evolve the text of the specification, not a license to the Java EE trademarks themselves.
- The javax namespace must remain as is; for it to be evolved, a different namespace must be used. The javax namespace is a trademark of Oracle, and because of this there are quality controls that Oracle required for its evolution. Ultimately, because of those controls, the Eclipse Foundation felt it was better to branch javax into a different namespace and evolve it separately, solely under Jakarta EE governance.
- Jakarta EE 8 is targeted to be released around Oracle Code One. Jakarta EE 8 will be exactly the same as Java EE 8; the only difference is that it will be licensed by Jakarta, not Oracle, and will only require membership in the working group.
- Beyond EE 8, the release cycle, the plan for moving the javax namespace (and keeping compatibility with both the old javax namespace and the new one), and new specifications for inclusion in Jakarta EE are still active areas of discussion.
- Unrelated to Jakarta EE (but discussed in the same board meeting), an attempt to bundle OpenJ9 with the Eclipse IDE failed because of licensing restrictions around a certified Java runtime. OpenJ9 is certified when acquired through an IBM channel, but not when downloaded directly.

 Piero Molino on Ludwig, a Code-Free Deep Learning Toolbox | File Type: audio/mpeg | Duration: 00:29:19

Ludwig is a code-free deep learning toolbox originally created and open-sourced by Uber AI. Today on the podcast, Piero Molino, the creator of Ludwig, and Wes Reisz discuss the project. The two talk about how the project works, its strengths, its roadmap, and how it's being used by companies inside (and outside) of Uber. They wrap up by discussing the path ahead for Ludwig and how you can get involved with the project.

Why listen to this podcast:

- Uber AI is the research and platform team for everything AI at the company, with the exception of self-driving cars, which are left to Uber ATG.
- Ludwig allows you to specify a TensorFlow model in a declarative format that focuses on your inputs and outputs. Ludwig then builds a model that can deal with those types of inputs and outputs without a developer explicitly specifying how that is done. (A sketch of this declarative style follows below.)
- Because of Ludwig's datatype abstraction for inputs and outputs, there is a huge range of applications that can be created. For example, an input could be text and the output a category; in this case, Ludwig will create a text classifier. An image and text input (such as the question "Is there a dog in this image?") would yield a question-answering system. Many combinations are possible with Ludwig.
- Uber is using Ludwig for text classification in customer support.
- Datatypes can be extended easily with Ludwig for custom use cases.
- The Ludwig team would love to have people contribute to the project. There are simple feature requests that are just not prioritised given the current contributor workload, and it's a great place to get involved with machine learning and gain experience with the project.
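As a sketch of Ludwig's declarative style, assuming the Python API around the time of the episode (the CSV file and feature names here are hypothetical), you declare only what the inputs and outputs are and Ludwig decides how to encode them and what model to build:

```python
# pip install ludwig
from ludwig.api import LudwigModel

# Declarative definition: text in, category out -> a text classifier.
model_definition = {
    "input_features": [{"name": "ticket_text", "type": "text"}],
    "output_features": [{"name": "category", "type": "category"}],
}

model = LudwigModel(model_definition)
train_stats = model.train(data_csv="support_tickets.csv")  # hypothetical file

predictions = model.predict(data_csv="new_tickets.csv")
print(predictions.head())
```

Swapping the input type to `image` (or adding a second input feature) changes what gets built without any model code, which is the datatype abstraction described above.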
