
The InfoQ Podcast

Summary: Software engineers, architects, and team leads have found inspiration to drive change and innovation in their teams by listening to the weekly InfoQ Podcast. They have received essential information that helped them validate their software development road map. We have achieved that by interviewing some of the top CTOs, engineers, and technology directors from companies like Uber, Netflix, and more. The podcast has been downloaded over 1,200,000 times in the last three years.


Podcasts:

 Ben Sigelman, Co-Creator of Dapper & OpenTracing API, on Observability | File Type: audio/mpeg | Duration: 00:42:18

Ben Sigelman is the CEO of Lightstep and the author of the Dapper paper that spawned distributed tracing discussions in the software industry. On the podcast today, Ben discusses observability with Wes and shares his thoughts on logging, metrics, and tracing. The two discuss detection and refinement as the real problem when it comes to diagnosing and troubleshooting incidents with data. The podcast is full of useful tips on building and implementing an effective observability strategy.

Why listen to this podcast:

- If you’re getting woken up by an alert, it should actually be an emergency. When that happens, things to think about include: when did this happen, how quickly is it changing, how did it change, and what things in my entire system are correlated with that change.
- A reality that seems to be happening in our industry is that we’re coupling the move to microservices with a move to allowing teams to fully self-determine technology stacks. This is dangerous because we’re not at the stage where all languages/tools/frameworks are equivalent.
- While a service mesh offers great potential for integrations at layer 7, many people have unrealistic expectations of how much observability a service mesh will enable. The service mesh does a great job of showing you the communication between services, but the details often get lost in the work that’s being done inside each service. Service owners still need to do much more work to instrument applications.
- Too many people focus on the 3 Pillars of Observability. While logs, metrics, and tracing are important, an observability strategy ought to be focused on the core workflows and needs around detection and refinement.
- Logging about individual transactions is better done with tracing; it’s unaffordable at scale to do otherwise.
- Just like logging, metrics about individual transactions are less valuable. Application-level metrics, such as how long a queue is, are the ones that are truly useful.
- The problem with metrics is that the only tool you have in a metrics system to explain the variations you’re seeing is grouping by tags. The tags you want to group by have high cardinality, so you can’t group by them. You end up in a catch-22.
- Tracing is about taking traces and doing something useful with them. If you look at hundreds or thousands of traces, you can answer really important questions, with evidence, about what’s changing in a system’s workloads and dependencies.
- When it comes to serverless, tracing is more important than ever because everything is so ephemeral. Node is one of the most popular serverless languages/frameworks and, unfortunately, also one of the hardest of all to trace.
- The most important thing is to make sure that you choose something portable for the actual instrumentation piece of a distributed tracing system, so you never have to go back and rip out the instrumentation because you want to switch vendors. This is becoming conventional wisdom. (A minimal sketch of portable instrumentation follows these notes.)

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2PPIdeE
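To illustrate the portability point, here is a minimal sketch using the vendor-neutral OpenTracing Python API (the opentracing package); the operation names and tags are invented for illustration. The application code touches only the neutral interface, so the concrete tracer behind it can be swapped without ripping out instrumentation:

```python
import opentracing

# A concrete tracer (Jaeger, LightStep, Zipkin, ...) would be installed
# once at startup via opentracing.set_global_tracer(...); until then the
# default no-op tracer is returned, so this sketch runs as-is.
tracer = opentracing.global_tracer()

def handle_checkout(order_id):
    # start_active_span makes this span the implicit parent of any
    # spans started while it is active.
    with tracer.start_active_span("checkout") as scope:
        scope.span.set_tag("order.id", order_id)
        charge_card(order_id)

def charge_card(order_id):
    # Automatically recorded as a child of the active "checkout" span.
    with tracer.start_active_span("charge-card") as scope:
        scope.span.log_kv({"event": "card-charged", "order.id": order_id})

handle_checkout("order-123")
```

Switching vendors then means changing only the tracer installed at startup, not the instrumented call sites.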

 Ashley Williams on Web Assembly, Wasi, & the Application Edge | File Type: audio/mpeg | Duration: 00:40:40

- Web Assembly (wasm) is a set of instructions or a low-level byte code that is a target for higher-level languages. It was added to the browser because it was a portion of the web platform that many felt was simply missing.
- Wasm is still a young technology. It performs really well for computationally intensive applications and also offers performance consistency (because it lacks a garbage collector).
- Bootstrapping an application using the Rust toolchain looks like this: pull down a template, export a function using an attribute (which declares that you want to access this function from JavaScript), and run a tool called wasm-pack (which compiles it into Web Assembly and then runs a tool called wasm-bindgen that generates Rust types for Wasm). Then you can talk to that binary from your code as if it were written in JavaScript.
- Cloudflare Workers allow JavaScript that you might have written for a server to be distributed and run at the application edge (close to the end user). It uses a similar model to serverless architecture platforms.
- Interesting use cases such as A/B testing, DDoS prevention, server-side rendering, or traffic shaping can be handled at the edge.
- Wasm is an approach to bringing full application experiences to the edge.
- Wasi (Web Assembly System Interface) is a standardized interface for running Web Assembly in places outside of the web. Fastly recently released Lucet, a pure Web Assembly runtime for their edge built on top of Wasi (it allows access to lower-level things at the edge like sockets and UDP). (A small sketch of running Wasm outside the browser follows these notes.)
- Zoom has a web client written in Web Assembly.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2Dw3jcH
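As a concrete taste of Wasm running outside the web, here is a sketch using the wasmtime Python package to load and call a module. The module is written inline in WebAssembly text format, and the API calls reflect recent wasmtime-py releases; treat the exact signatures as an assumption:

```python
from wasmtime import Engine, Store, Module, Instance

# A tiny module in WebAssembly text format exporting add(i32, i32) -> i32.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)        # also accepts compiled .wasm bytes
instance = Instance(store, module, [])

add = instance.exports(store)["add"]
print(add(store, 2, 3))  # -> 5, computed inside the Wasm sandbox
```

The same compiled module could run in a browser, at the edge, or on a server, which is the portability Wasi is standardizing.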

 Bryan Cantrill on Rust and Why He Feels It’s The Biggest Change In Systems Development in His Career | File Type: audio/mpeg | Duration: 00:38:41

Bryan Cantrill is the CTO of Joyent and well known for the development of DTrace at Sun Microsystems. Today on the podcast, Bryan discusses with Wes Reisz a bit about the origins of DTrace, and then spends the rest of the time discussing why he feels Rust is the “biggest development in systems development in his career.” The podcast wraps with a bit about why Bryan feels we should be rewriting parts of the operating system in Rust.

Why listen to the podcast:

• DTrace came down to a desire to use dynamic program text modification to instrument running systems (much like debuggers do), and has its origins in Bryan’s undergraduate days.
• When a programming language delivers something to you, it takes something from you in the runtime. The classic example of this is garbage collection: the language gives you the ability to use memory dynamically without thinking about how the memory is stored in the system, but then it exacts a runtime cost.
• One of the issues with C is that it just doesn’t compose well. You can’t necessarily pull a library off the Internet and use it easily; everyone’s C is laden with so many idiosyncrasies about how it’s used and about the contract for how memory is managed.
• Ownership is statically tracking who owns each structure. It’s ownership, together with the absence of GC, that allows Rust to address the composability issues found in C.
• It’s really easy in C to have an integer overflow, which leads to memory-safety issues that can be exploited by an attacker. Rust makes this pretty much impossible because it’s very strict about how you use signed vs. unsigned types.
• You don’t want people solving the same problems over and over again. You want composability. You want abstractions. What you don’t want is to remove so much developer friction that you produce code riddled with problems. For example, forcing developers to run a linter slows them down, but it results in better artifacts. Rust effectively builds a lot of that linter checking into the memory management/type checking system.
• While there’s some learning curve to Rust, it’s not that bad if you realize there are several core concepts you need to grasp. Rust is one of those languages that you really need to learn in a structured way: sit down with a book and learn it.
• Rust struggles when you have objects that are multiply owned (such as a doubly linked list), because it doesn’t know who owns what. While Rust supports unsafe operations, you should resist the temptation to develop with a lot of unsafe code if you want the benefits Rust offers developers.
• Firmware is a great spot for growing Rust development, as part of a process of replacing bits of what we think of as the operating system.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2uZ5QHZ

 Oracle Labs’ Duncan Macgregor on Graal, TruffleRuby, & Project Loom | File Type: audio/mpeg | Duration: 00:29:41

Duncan Macgregor speaks with Wes Reisz about the work being done on the experimental Graal compiler. He talks about the use cases where the new JIT compiler really excels (compared to C2). In addition, Duncan talks about the relationship of Graal to Truffle. The two then discuss a language Duncan works on at Oracle Labs (TruffleRuby) that is implemented on this stack. Finally, the podcast wraps with a discussion of Project Loom and its relationship to TruffleRuby and Graal.

Why listen to this podcast:

- Graal is a replacement for the JVM’s C2 JIT compiler. It was tracked with JEP 295 (Ahead-of-Time Compilation) and included in Java 9. As of Java 10, Graal is experimental for the Linux x64 platform.
- Graal is written in Java and excels at compiling code that takes a functional approach to solving problems (such as Scala). It can also offer improvements/optimizations for other languages, including non-traditional JVM languages such as C and Ruby.
- Truffle is a language implementation framework used by Graal. The idea is that rather than having to write a compiler for your language, you write an interpreter. This gives you the ability to write specializations at a higher level of abstraction, which yields performance and better understanding. (A toy illustration of the interpreter-with-specializations idea follows these notes.)
- Truffle’s architecture and design allow things like interop between unrelated languages, garbage collection, and types.
- TruffleRuby and JRuby started off with a lot of shared code. They’ve since branched: JRuby today focuses on integration with other Java classes; it compiles to bytecode and then relies on the C2 JIT to run on the JVM. TruffleRuby doesn’t try to compile to Java classes and only uses the Truffle framework to compile the things it needs. TruffleRuby is able to use most of native Ruby.
- Project Loom aims to add one-shot delimited continuations to the JVM. It leverages fibers (a much lighter concurrency primitive than threads) and can literally run millions of them.
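Truffle itself is a Java framework, but the core idea (AST nodes that rewrite themselves into specialized versions based on the types they observe, giving the JIT simple fast paths to compile) can be sketched in a few lines of toy Python. This is purely illustrative and is not Truffle’s API:

```python
# Toy self-specializing AST interpreter. Nodes start generic; once a node
# observes its operand types, it rewrites itself to a specialized fast path.
class Literal:
    def __init__(self, value):
        self.value = value
    def execute(self, env):
        return self.value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def execute(self, env):
        left, right = self.left.execute(env), self.right.execute(env)
        if isinstance(left, int) and isinstance(right, int):
            # Node rewriting: future calls skip the generic type checks.
            self.execute = self._execute_int
        return left + right
    def _execute_int(self, env):
        return self.left.execute(env) + self.right.execute(env)

tree = Add(Literal(2), Literal(3))
print(tree.execute({}))  # 5, via the generic path; the node specializes
print(tree.execute({}))  # 5, via the specialized int path
```

Truffle does this with far more machinery (and Graal compiles the specialized trees to machine code), but the rewrite-on-observed-types loop is the core trick.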

 Rod Johnson Chats about the Spring Framework Early Days, Languages Post-Java, & Rethinking CI/CD | File Type: audio/mpeg | Duration: 00:34:18

Today on The InfoQ Podcast, Wes talks with Rod Johnson. Rod is famously responsible for the creation of the Spring Framework. The two talk about the early years of the framework, and Rod provides some of the history of its creation. After discussing Spring, Wes and Rod discuss the languages Rod has been involved with since Java (these include Scala and TypeScript), and he talks a bit about what he liked (and didn’t like) about each. Finally, the two wrap by discussing Atomist and how it’s trying to change the idea of software delivery from a statically defined pipeline (located in individual repositories) to an event hub that drives a series of actions for software delivery. He describes this as creating an API for your software.

Why listen to this podcast:

- The origins of the Spring Framework really came about through the process of trying to write a really great book about J2EE in 2002. It was through that process that Rod Johnson felt there was a better way, which ultimately led to the creation of the Spring Framework.
- What started as examples and references became the Spring Framework. By 2005 there were about 2 million downloads of the Spring Framework. After leaving VMware in 2013, Rod spent several years working with Scala. One of the elegant features that really attracted Rod to Scala was how everything is an expression. One of the things he didn’t like was an affinity for overly complex approaches to problem solving.
- Today at Atomist, Rod does a lot of work in Node. He really enjoys TypeScript’s robust extra layer of typing over a dynamic language, and the ability to escape to JavaScript if needed (similar to escaping types with reflection in Java, as found in the internals of the Spring Framework).
- Atomist, the company he founded after leaving VMware, is rethinking CI/CD, moving from a static pipeline defined in every repository to an event-driven system that defines how to respond to specific events (such as a push from Git). For example, all pushes with Spring Boot can be configured to be scanned with SonarQube, or a push containing a kubespec might get deployed to a Kubernetes cluster. He describes this as creating an API for your software. (A toy sketch of this event-driven idea follows these notes.)
- One of the reasons Atomist integrates so tightly with Slack (and other similar messaging platforms) is that it allows developers to shape their own relevant messages. By joining (or leaving) channels, people are able to subscribe to only the information they actually want. Meeting developers inside Slack is an important interface for Atomist.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2FxK3xf
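A toy sketch of the event-hub idea: pushes flow to one place, and rules decide which delivery actions apply. This is not Atomist’s actual SDK (which is TypeScript-based); the event shape, file names, and handlers here are invented:

```python
# Hypothetical event-driven delivery: instead of a static pipeline per
# repository, one event hub routes each push to whatever actions apply.
def has_file(push, name):
    return name in push["files"]

def scan_with_sonarqube(push):
    print(f"scanning {push['repo']} with SonarQube")

def deploy_to_kubernetes(push):
    print(f"deploying {push['repo']} to the cluster")

# Each rule pairs a predicate over the push with a delivery action.
RULES = [
    (lambda p: has_file(p, "pom.xml"), scan_with_sonarqube),
    (lambda p: has_file(p, "k8s/deployment.yaml"), deploy_to_kubernetes),
]

def on_push(push):
    for applies, action in RULES:
        if applies(push):
            action(push)

on_push({"repo": "shop-service",
         "files": ["pom.xml", "k8s/deployment.yaml", "src/Main.java"]})
```

Adding a delivery step then means registering a new rule at the hub, rather than editing a pipeline file in every repository.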

 Katharine Jarmul and Ethical Machine Learning | File Type: audio/mpeg | Duration: 00:32:29

Today on The InfoQ Podcast, Wes talks with Katharine Jarmul about privacy and fairness in machine learning algorithms. Katharine discusses what’s meant by ethical machine learning and some things to consider when working towards achieving fairness. Katharine is the co-founder of KIProtect, a machine learning security and privacy firm based in Germany, and is one of the three keynote speakers at QCon.ai.

Why listen to this podcast:

- Ethical machine learning is about practices and strategies for creating more ethical machine learning models. There are many highly publicized/documented examples of machine learning gone awry that show the importance of addressing ethical machine learning.
- One of the first steps to preventing bias in machine learning is awareness. You should take time to identify your team goals and establish fairness criteria that are revisited over time. These fairness criteria can then be used to establish the minimum fairness allowed in production. (A toy fairness check follows these notes.)
- Laws like GDPR in the EU and HIPAA in the US provide privacy and security to users and carry legal implications if not followed.
- Adversarial examples (like the DolphinAttack, which used ultrasonic sounds to activate voice assistants) can be used to fool a machine learning model into hearing or seeing something that’s not there. Machine learning models are increasingly becoming an attack vector for bad actors.
- Machine learning is always an iterative process.
- Zero-knowledge computing (or federated learning) is an example of machine learning at the edge, designed to respect the privacy of an individual’s information.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2TD3nSd
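One concrete criterion teams often start with is demographic parity: the positive-prediction rate should be similar across groups. A toy check (the data, group labels, and acceptable gap are invented; the episode doesn’t prescribe a specific metric):

```python
# Toy demographic-parity check: compare positive-prediction rates
# across a protected attribute and flag the gap against a threshold.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # model outputs
groups      = ["a", "a", "a", "b", "b", "b", "a", "b", "b", "a"]

def positive_rate(group):
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

gap = abs(positive_rate("a") - positive_rate("b"))
MAX_GAP = 0.10  # invented minimum-fairness criterion for production
print(f"parity gap = {gap:.2f}; acceptable = {gap <= MAX_GAP}")
```

A check like this, run before each release, is one way to make a team’s minimum fairness criterion executable rather than aspirational.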

 Grady Booch on Today’s Artificial Intelligence Reality and What it Means for Developers | File Type: audio/mpeg | Duration: 00:32:32

Today on The InfoQ Podcast, Wes Reisz speaks with Grady Booch. Grady is well known as the co-creator of UML, an original member of the design patterns movement, and now for the work he’s doing around Artificial Intelligence. On the podcast today, the two discuss what today’s reality is for AI. Grady answers questions like what AI means to the practice of writing software and how he sees it impacting software delivery. In addition, Grady talks about the AI surges (and winters) over the years, the importance of ethics in software, and a host of other related questions.

Why listen to this podcast:

- There have been prior ages of AI that led to immediate winters where reality set in. It stands to reason there will be a version of an AI winter that follows today’s excitement around deep learning.
- AIs are beginning to look at code for testing edge cases in software, and to do things such as looking over your shoulder and identifying patterns in the code that you write.
- AIs will remove tedium for software developers; however, software development is (and will remain) a labor-intensive activity for decades to come. AI is another bag of tools in a larger systems activity.
- Many AI developers are young white men from the United States, and there are a number of inherent biases in that fact. There are several organizations focused on combating some of these biases and bringing ethical learning into the field. This is important for us to be aware of and encourage.
- The traditional techniques of systems engineering we know for building non-AI systems will still apply. AIs are pieces of larger systems. They might be really interesting parts, but each is just a part of a larger system that requires a lot of non-AI engineering.
- Early machine learning systems were mostly learn-and-forget systems: you teach them, you deploy them, and you walk away. Today we do continuous learning, and we need to integrate these new models into the delivery pipeline.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2SjJOsq

 Joe Beda on Kubernetes & the CNCF | File Type: audio/mpeg | Duration: 00:30:12

Today on The InfoQ Podcast, Wes talks with Joe Beda. Joe is one of the co-creators of Kubernetes. What started in the fall of 2013 with Craig McLuckie, Joe Beda, and Brendan Burns working on cloud infrastructure has become the default orchestrator for cloud native architectures. On the show, the two discuss the recent purchase of Heptio by VMware, the Kubernetes privilege escalation flaw (and the response to it), Kubernetes Enhancement Proposals, the CNCF and the organization of Kubernetes, and some future hopes for the platform.

Why listen to this podcast:

- Heptio, the company Joe and Craig McLuckie co-founded, viewed themselves not as a Kubernetes company but as a cloud native company. Joining VMware allowed the company to continue a mission of helping people move to the cloud and take advantage of cloud patterns (regardless of where they’re running).
- Re:Invent 2017, when EKS was announced, was a watershed moment for Kubernetes. It marked the point where enough customers were asking for Kubernetes that the major cloud providers started to offer first-class support.
- Kubernetes 1.13 included a patch for the Kubernetes privilege escalation flaw. While the flaw was a bad thing, the community-based security response demonstrated product maturity.
- Kubernetes has an idea of committees, SIGs, and working groups; security is one of the committees. A small group of people coordinated the security response, and from there a trusted set of vendors validated and tested patches. Most of the response is modeled on how many other open source projects handle security response.
- Over the last couple of releases, Kubernetes has introduced SIG Architecture, a special interest group that provides overarching review for changes that sweep across Kubernetes. As part of SIG Architecture, the Kubernetes community has introduced the Kubernetes Enhancement Proposal (KEP) process, a way for people to propose architectural changes to Kubernetes.
- The goal of the CNCF is to curate and provide support to a set of projects (of which Kubernetes is one). The TOC (Technical Oversight Committee) decides which projects will be part of the CNCF and how those projects are supported.
- Kubernetes was always viewed by its creators as something to be built on. It was never really viewed as the end goal.

 Megan Cartwright on Building a Machine Learning MVP at an Early Stage Startup | File Type: audio/mpeg | Duration: 00:32:14

Today on the InfoQ Podcast, Wes speaks with ThirdLove’s Megan Cartwright. Megan is the Director of Data Science for the personalized bra company. In the podcast, Megan first discusses why their customers need a more personal experience and how they’re using technology to help. She spends quite a bit of time discussing how the team got to an early MVP, and then how they did the same for an early machine learning MVP for product recommendations. In this later part, she discusses the decisions they made about what data to use, how to get the solution into production quickly, how to update/train new models, and where they needed help. It’s a real early-stage startup story of a lean team leveraging machine learning to get to a practical recommendation solution in a very short timeframe.

Why listen to this podcast:

- The experience of selecting a bra is a poor one, characterized by awkward fittings and an often uncomfortable product that may not even fit correctly. ThirdLove is a company built to serve this market.
- ThirdLove took a lean approach to developing their architecture. It’s built with a Parse backend, and they leveraged Shopify to build the site. The company’s first recommender system used a rules engine embedded in the front end. After that, they moved to a machine learning MVP: a Python recommender service using a Random Forest algorithm in scikit-learn. (A minimal sketch of that kind of model follows these notes.)
- Despite having the data from 10 million surveys, the first algorithms only needed about 100K records to be trained. The takeaway: you don’t have to have huge amounts of data to get started with machine learning.
- To initially deploy their ML solution, ThirdLove first shadowed all traffic through the algorithm and compared its output to the rules engine’s. Using this, along with information on the full customer order lifecycle, they validated that the ML solution worked correctly and outperformed the rules engine.
- ThirdLove’s machine learning story shows that you can move toward a machine learning solution quickly by leveraging your own network and using tools that may already be familiar to your team.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2G9RnQn
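The episode confirms only scikit-learn and a Random Forest; everything else below (feature encoding, sizes, labels) is invented to make the sketch self-contained:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for ~100K survey responses: encoded answers about current size,
# fit issues, preferred style, etc. The data is random, so the score is
# meaningless; the point is the shape of the pipeline, not the numbers.
X = rng.integers(0, 5, size=(100_000, 8))
y = rng.integers(0, 20, size=100_000)  # one of 20 size/style recommendations

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# In shadow mode, live requests would be scored here and compared against
# the rules engine's answer before the model takes real traffic.
print("holdout accuracy:", model.score(X_test, y_test))
```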

 Lynn Langit on 25% Time and Cloud Adoption within Genomic Research Organizations | File Type: audio/mpeg | Duration: 00:26:38

Lynn Langit is a consulting cloud architect who holds recognitions from all three major cloud vendors for her contributions to their respective communities. On today’s podcast, Wes talks with Lynn about a concept she calls 25% time, and about a project it led her to in genomic research. 25% time is her own method of learning while collaborating with someone else for a greater good. A recent project led her to become involved with the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia. Through cloud adoption and some lean startup practices, they were able to drop the run time for a machine learning algorithm against a genomic dataset from 500 hours to 10 minutes.

Why listen to this podcast:

- 25% time is a way to learn, study, or collaborate with someone else for a greater good. It’s unbilled time in the service of others. Using the idea of 25% time, along with some personal events that occurred in her life, Lynn became involved with genomic researchers in Australia.
- The price of genomic sequencing has dropped, enabling researchers to create huge repositories of genomic data; however, that data was mostly on-prem, and the idea of building data pipelines was pretty new in the genome community. Additionally, the genome itself is 3 billion data points, and as few as 10-15 variants can be statistically significant.
- The challenge was to leverage cloud resources. To gain a quick win and buy-in for cloud adoption at CSIRO (an independent Australian federal government agency), the first step was to capture interest in the idea, so the team stored their reference data in the cloud and enabled access via a Jupyter Notebook.
- They demonstrated a use case against the genomic data set leveraging a synthetic phenotype (a fake disease) called hipsterdom. The solution became a basis for global discussion that got more people involved in the community.
- By leveraging cloud resources, CSIRO was able to take a run over their dataset that took 500 hours on an on-prem Spark cluster down to 10 minutes.
- Learning a new programming language has unseen benefits. For example, Ballerina (a language written as an integration language between APIs) interested Lynn because of its live visual diagrams; it then benefited her in some of the cloud pipelines because of its ability to produce YAML files.

Check the landing page on InfoQ: https://bit.ly/2T2LZBQ

 Charles Humble and Wes Reisz Take a Look Back at 2018 and Speculate on What 2019 Might Have in Store | File Type: audio/mpeg | Duration: 00:35:18

In this podcast Charles Humble and Wes Reisz talk about autonomous vehicles, GDPR, quantum computing, microservices, AR/VR and more.

* Waymo vehicles are now allowed on the road in California running fully autonomously; Waymo seems to be a long way ahead in terms of the number of autonomous miles driven, but there are something like 60 other companies in California approved to test autonomous vehicles.
* It seems reasonable to assume that considerably more regulation around privacy will appear over the next few years, as governments and regulators grapple not only with social media but also with who owns the data from technology like AR glasses or self-driving cars.
* We’ve seen a huge amount of interest in the ethical implications of technology this year, with Uber getting into some regulatory trouble, and Facebook being co-opted by foreign governments for nefarious purposes. As software becomes more and more pervasive in people's lives, the ethical impact of what we all do becomes more and more profound.
* Researchers from IBM, the University of Waterloo in Canada, and the Technical University of Munich in Germany have proved theoretically that quantum computers can solve certain problems faster than classical computers.
* We’re also seeing a lot of interest around human-computer interaction: AR, VR, voice, neural interfaces. We had a presentation at QCon San Francisco from CTRL-labs, who are working on neural interfaces (in this case interpreting nerve signals), and they have working prototypes. Much like touch, this could open up computing to a whole other group of people.

 Java Language Architect Brian Goetz on Java and the JDK | File Type: audio/mpeg | Duration: 00:41:18

On this week’s podcast, Wes Reisz talks with Brian Goetz. Brian is the Java Language Architect at Oracle. The two start with a discussion of what the six-month cadence has meant to the teams developing Java, then move to a review of the features in Java 9 through 12. Finally, the two discuss the longer-term side projects (such as Amber, Loom, and Valhalla) and their role in the larger release process for the JDK.

* The JDK’s six-month cadence changed the way the JDK is delivered and planned. While it definitely provides more rapid delivery at expected intervals, the release-train approach turned out to also improve flexibility and efficiency.
* Oracle JDK and OpenJDK are almost identical. Most of the JDK distributions are forks from OpenJDK with different bug fixes and backports applied, so the difference between the distributions now is largely which bug fixes are picked up.
* Local variable type inference (released as part of Java 10) illustrated the tension around making changes to the language. Many people wanted the change, but many others felt it would enable people to write bad code. Oracle had to balance the two views when making the change.
* The number of Java versions allows finer-grained decision making about what is appropriate for an application. With the adoption of containers, applications are bundled with an exact JDK version rather than having to use one from the system level. The different versions give developers more options.
* Incubating features are new libraries added to the JDK. They were offered starting with Java 9 as a way for people to test and offer feedback more rapidly. With Java 12, preview features arrive; preview features are similar, but are core platform and language features.
* Shenandoah and ZGC are both low-latency garbage collectors. They originally came from different sources. While the two garbage collectors are similar, each has different performance characteristics under different workloads; they represent options available to JVM developers.
* Most non-trivial JDK features take more than six months to develop. Longer-term side projects like Amber, Loom, and Valhalla are where these features are developed prior to being released with a version of the JDK. The projects range from language enhancements to concurrency work.

 Tanya Reilly on Site Reliability Engineering and the Evolution of the New York City Fire Code | File Type: audio/mpeg | Duration: 00:32:27

This week on the InfoQ Podcast, Wes Reisz talks to Tanya Reilly (Principal Engineer at Squarespace and previously a staff SRE at Google). Tanya discusses her research into how the fire code evolved in New York and draws on some of the parallels she sees in software. Along the way, she discusses what it means to be an SRE, what effective aspects of the role might look like, and her opinions on what we as an industry should be doing to prevent disasters. This podcast features discussion of paved roads, prevention, testing, firefighting (in software), and reliability questions to ask throughout the software lifecycle.

Why listen to this podcast:

- Teams are increasingly responsible for the entire software lifecycle. When this happens, they think about the software differently, because they know they’re the ones who will get paged if it fails. This idea is at the core of the “You Build It, You Run It” philosophy in DevOps.
- The role of SRE is to define how to do things in a really reliable way. The focus is to make the majority of the operations work go away and, for the things that can’t go away, to make them as easy as possible.
- At the very start of a project (when you’re writing the initial design), you should be thinking about the dependencies of a system and how those who follow will be able to discover them. A great way to do this is to offer an API that people will want to use, and then instrument it.
- We can learn a lot from the growth of fire safety regulations as a metaphor for software: fireproof interior walls, socializing best practices, inspections, and circuit breakers are all examples.
- The work SREs do varies from place to place, ranging from making recommendations on patterns to creating libraries. Occasionally, SREs are firefighters of last resort; in those cases, though, they really are the last resort.
- We use error budgets and SLOs to quantify how much risk we’re comfortable taking, and to inform how much less (or more) risk we’re willing to take on. (A toy error-budget calculation follows these notes.)
- We need to consider software reliability throughout the full cycle of software development. When you build systems, think about them as if there will not be someone on call.
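To make the error-budget idea concrete, a toy calculation (the SLO target and request counts are invented):

```python
# Toy error-budget arithmetic for a 99.9% availability SLO over 30 days.
SLO_TARGET = 0.999
total_requests = 10_000_000
failed_requests = 7_200

budget = (1 - SLO_TARGET) * total_requests       # allowed failures: 10,000
consumed = failed_requests / budget              # fraction of budget burned
print(f"error budget consumed: {consumed:.0%}")  # 72%

remaining = budget - failed_requests
print(f"failures remaining before the SLO is broken: {remaining:,.0f}")
```

When the budget is nearly consumed, the team trades feature velocity for reliability work; when plenty remains, taking on more risk is affordable.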

 Jason Maude on Building a Modern Cloud-Based Banking Startup in Java | File Type: audio/mpeg | Duration: 00:36:18

On today’s podcast, Wes Reisz talks with Jason Maude of Starling Bank. Starling Bank is a relatively new startup in the United Kingdom working in the banking sector. The two discuss the architecture, technology choices, and design processes used at Starling. In addition, Maude goes into some of the realities of building in the cloud, working with regulators, and proving robustness with practices like chaos testing.

Why listen to this podcast:

- Starling Bank was created because the government lowered the barrier to entry for banking startups in reaction to previous industry bailouts.
- The system is composed of around 19 applications hosted on AWS, running Java, and backed by a PostgreSQL database.
- These applications are not monolithic but are focused around common functionality (such as a card or payment service).
- Java was chosen primarily for its maturity and long-term viability/reliability in the market.
- At the heart of Starling is the rule that every action the system takes happens at least once and at most once. To help enforce these rules, everything in the system carries a correlation ID (a UUID), which is used to make sure both rules are met. (A toy sketch of this idempotency pattern follows these notes.)
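A toy sketch of the at-least-once/at-most-once idea. In a real system the record of processed commands would live in the database, not in memory; the function and amounts here are invented:

```python
import uuid

# Commands are retried until acknowledged (at least once), while the
# correlation ID ensures each one is applied no more than once (at most
# once); together an action happens effectively exactly once.
processed: set[uuid.UUID] = set()

def apply_payment(correlation_id: uuid.UUID, amount: int) -> None:
    if correlation_id in processed:
        return  # duplicate delivery: already applied, do nothing
    # ... move the money ...
    processed.add(correlation_id)
    print(f"applied payment of {amount} ({correlation_id})")

cid = uuid.uuid4()
apply_payment(cid, 100)  # first delivery: applied
apply_payment(cid, 100)  # retry of the same command: safely ignored
```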

 Martin Fowler Discusses New Edition of Refactoring, Along With Thoughts on Evolutionary Architecture | File Type: audio/mpeg | Duration: 00:32:55

Martin Fowler chats about the work he’s done over the last couple of years on the rewrite of the original Refactoring book. He discusses how his thinking has changed and how that’s affected the new edition of the book. In addition to discussing refactorings, Martin and Wes discuss his thoughts on evolutionary architecture, team structures, and how the idea of refactoring can be applied in larger architectural contexts.

Why listen to this podcast:

- Refactoring is the idea of trying to identify the sequence of small steps that allows you to make a big change. That core idea hasn’t changed.
- Several new refactorings in the book deal with transforming data structures into other data structures (Combine Functions into Transform, for example; a small sketch of that refactoring follows these notes).
- Several of the refactorings were removed or not added to the book, in favor of adding them to a web edition of the book.
- A lot of the early refactorings are like cleaning the dirt off the glass of a window: you need them just to be able to see where the hell you are, and then you can start looking at the broader ones.
- Refactoring can be applied broadly to architecture evolution. Two recent posts on MartinFowler.com, "How to break a Monolith into Microservices" by Zhamak Dehghani and "How to extract a data-rich service from a monolith" by Praful Todkar, deal with this specifically.
- Evolutionary architecture is the broad principle that architecture is constantly changing. While related to microservices, it’s not microservices by another name; you could evolve toward or away from microservices, for example.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2QbdHej
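A small sketch of Combine Functions into Transform, loosely following the book’s JavaScript example but written in Python here; the reading fields, rate, and threshold are stand-ins:

```python
BASE_RATE = 0.1
TAX_THRESHOLD = 5

def acquire_reading():
    # Stand-in for fetching a raw meter reading from somewhere.
    return {"customer": "ivan", "quantity": 10, "month": 5, "year": 2017}

# Before: derivations scattered as separate functions called ad hoc.
def base_charge(reading):
    return BASE_RATE * reading["quantity"]

# After: one transform enriches the record, so every derived value is
# computed, and named, in a single place.
def enrich_reading(reading):
    result = dict(reading)
    result["base_charge"] = BASE_RATE * result["quantity"]
    result["taxable_charge"] = max(
        0, result["base_charge"] - BASE_RATE * TAX_THRESHOLD)
    return result

reading = enrich_reading(acquire_reading())
print(reading["base_charge"], reading["taxable_charge"])  # 1.0 0.5
```

The transform gives the derived values a single home, so a change to the charging logic touches one function instead of many call sites.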
