
Learning Machines 101

Summary: Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?

  • Artist: Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.
  • Copyright: Copyright (c) 2014-2019 by Richard M. Golden. All rights reserved.

Podcasts:

 LM101-081: Ch3: How to Define Machine Learning (or at Least Try) | File Type: audio/mpeg | Duration: 37:20

This podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework”, which discusses how to formally define machine learning algorithms. A learning machine is viewed as a dynamical system that minimizes an objective function. In addition, the knowledge structure of the learning machine is interpreted as a preference relation graph implicitly specified by the objective function. Also, the new book “The Practitioner’s Guide to Graph Data” is reviewed.

 LM101-080: Ch2: How to Represent Knowledge using Set Theory | File Type: audio/mpeg | Duration: 31:43

This podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework”, with an expected publication date of May 2020. Chapter 2, titled “Set Theory for Concept Modeling”, discusses how to represent knowledge using set theory notation.

 LM101-079: Ch1: How to View Learning as Risk Minimization | File Type: audio/mpeg | Duration: 26:07

This podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”. Chapter 1 shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framework. This is useful because it provides a framework not only for understanding existing algorithms but also for suggesting new algorithms for specific applications.
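As a rough illustration of the empirical risk minimization idea described above (a minimal sketch, not taken from the book; the function names are invented for illustration), supervised learning amounts to choosing parameters that minimize the average loss over a training sample:

```python
import numpy as np

def empirical_risk(w, X, y, loss):
    """Average loss of a linear predictor w over the training sample."""
    return np.mean([loss(x @ w, t) for x, t in zip(X, y)])

# Squared-error loss: minimizing the resulting empirical risk for a
# linear predictor is exactly ordinary least squares.
sq_loss = lambda pred, target: (pred - target) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true  # noiseless targets, so the risk minimizer is exact

# Minimize the empirical risk in closed form via least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(empirical_risk(w_hat, X, y, sq_loss))  # close to 0
```

Swapping in a different loss (e.g. cross-entropy) or a different predictor family recovers other familiar algorithms from the same framework.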

 LM101-078: Ch0: How to Become a Machine Learning Expert | File Type: audio/mpeg | Duration: 39:18

This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachines101.com

 LM101-077: How to Choose the Best Model using BIC | File Type: audio/mpeg | Duration: 24:15


 LM101-076: How to Choose the Best Model using AIC and GAIC | File Type: audio/mpeg | Duration: 28:17

The precise semantic interpretations of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for selecting the best model are provided, along with the explicit assumptions required for the AIC and GAIC to be valid and explicit formulas so they can be used in practice. AIC and GAIC provide a way of estimating the average prediction error of your learning machine on test data without using test data or cross-validation methods.
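The classical AIC formula is AIC = −2 log L̂ + 2k, where L̂ is the maximized likelihood and k the number of free parameters. A minimal sketch (assuming Gaussian noise; this is the textbook AIC, not the GAIC variant discussed in the episode) comparing an underfit constant model with a linear model:

```python
import numpy as np

def aic(log_likelihood, k):
    # Classical Akaike Information Criterion: smaller is better.
    return -2.0 * log_likelihood + 2.0 * k

def gaussian_log_lik(residuals):
    # Maximized Gaussian log-likelihood with the variance at its MLE.
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 60)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # truly linear data

scores = {}
for degree in (0, 1):  # constant model vs. linear model
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    # Parameter count: polynomial coefficients plus the noise variance.
    scores[degree] = aic(gaussian_log_lik(resid), k=degree + 2)
print(scores)  # the linear model scores lower (better)
```

Note that the +2k penalty only matters when competing models fit comparably well; here the constant model is penalized mainly through its much worse likelihood.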

 LM101-075: Can computers think? A Mathematician's Response (remix) | File Type: audio/mpeg | Duration: 36:26

In this episode, we explore what computers can and cannot do using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines.

 LM101-074: How to Represent Knowledge using Logical Rules (remix) | File Type: audio/mpeg | Duration: 19:22

The challenges of representing knowledge using rules are discussed. Specifically, these challenges include issues of feature representation, having an adequate number of rules, obtaining rules that are mutually consistent, and having rules that handle special cases and situations. To learn more, visit: www.learningmachines101.com

 LM101-073: How to Build a Machine that Learns to Play Checkers (remix) | File Type: audio/mpeg | Duration: 24:58

This is a remix of the original second episode of Learning Machines 101, which describes in a little more detail how the computer program that Arthur Samuel developed in 1959 learned to play checkers by itself, without human intervention, using a mixture of classical artificial intelligence search methods and artificial neural network learning algorithms. The podcast ends with a review of Professor Nilsson’s book “The Quest for Artificial Intelligence: A History of Ideas and Achievements”.

 LM101-072: Welcome to the Big Artificial Intelligence Magic Show! (Remix of LM101-001 and LM101-002) | File Type: audio/mpeg | Duration: 22:07

This podcast is basically a remix of the first and second episodes of Learning Machines 101 and is intended to serve as the new introduction to the Learning Machines 101 podcast series. The book "Computation as Done by Brains and Machines" by Professor James A. Anderson is briefly reviewed. For more information, please visit: www.learningmachines101.com 

 LM101-071: How to Model Common Sense Knowledge using First-Order Logic and Markov Logic Nets | File Type: audio/mpeg | Duration: 31:40

This episode of Learning Machines 101 explains how to use first-order logic and Markov logic nets to represent common sense knowledge in machine learning algorithms. Links to free software for implementing Markov logic nets and to a free database of common-sense knowledge are provided.

 LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding | File Type: audio/mpeg | Duration: 32:04

In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding, with applications to improving online communications, identifying terrorists, improving lie detector tests, improving athletic performance, and designing smart advertising that looks at a customer’s face to determine whether they are bored or interested. The machine learning text “Pattern Recognition and Machine Learning” is reviewed.

 LM101-069: What Happened at the 2017 Neural Information Processing Systems Conference? | File Type: audio/mpeg | Duration: 23:20

This 69th episode of Learning Machines 101 provides a short overview of the 2017 Neural Information Processing Systems conference with a focus on the development of methods for teaching learning machines rather than simply training them on examples. In addition, a book review of the book “Deep Learning” is provided. 

 LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms | File Type: audio/mpeg | Duration: 21:49

Simple mathematical formulas are presented that ensure convergence of a sequence of parameter vectors generated by an iterative algorithm that repeatedly adds a stepsize multiplied by a search direction vector to the current parameter values. These formulas may be used as the basis for designing artificially intelligent automatic learning rate selection algorithms. Please visit: www.learningmachines101.com
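A minimal sketch of the kind of update described above, using a standard annealed stepsize schedule (the schedule and function names here are illustrative, not the specific formulas from the episode):

```python
import numpy as np

def annealed_gradient_descent(grad, w0, eta0=0.5, tau=10.0, steps=500):
    """Iterate w <- w - eta_t * grad(w) with eta_t = eta0 / (1 + t / tau).

    This schedule satisfies the classical stochastic-approximation
    stepsize conditions (the stepsizes sum to infinity while their
    squares sum to a finite value) that convergence theorems for such
    iterative algorithms typically require.
    """
    w = np.asarray(w0, dtype=float)
    for t in range(steps):
        eta = eta0 / (1.0 + t / tau)
        w = w - eta * grad(w)  # search direction: negative gradient
    return w

# Quadratic objective f(w) = 0.5 * ||w - target||^2, gradient w - target.
target = np.array([3.0, -1.0])
w_min = annealed_gradient_descent(lambda w: w - target, w0=[0.0, 0.0])
print(w_min)  # approaches [3.0, -1.0]
```

On this convex quadratic the iterates contract toward the minimizer at every step, since the stepsize stays in (0, 1) for the unit-curvature objective.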

 LM101-067: How to use Expectation Maximization to Learn Constraint Satisfaction Solutions (Rerun) | File Type: audio/mpeg | Duration: 25:40

In this episode we discuss how to learn to solve constraint satisfaction inference problems. The goal of the inference process is to infer the most probable values for unobservable variables, and the constraints themselves can be learned from experience. Specifically, the important machine learning method of Expectation Maximization for handling unobservable components of the data is introduced. Check it out at: www.learningmachines101.com
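As a rough sketch of the Expectation Maximization idea (a standard two-component Gaussian mixture example, not the constraint satisfaction model from the episode; function names are invented for illustration):

```python
import numpy as np

def em_two_gaussians(x, steps=50, sigma=1.0):
    """EM for a two-component 1-D Gaussian mixture with known variance.

    The component labels are the unobservable variables: EM alternates
    between inferring their posterior probabilities (E-step) and
    re-estimating the means and mixing weight (M-step).
    """
    mu = np.array([x.min(), x.max()])  # crude initialization
    pi = 0.5                           # mixing weight of component 1
    for _ in range(steps):
        # E-step: posterior responsibility of component 1 for each point.
        d0 = np.exp(-0.5 * ((x - mu[0]) / sigma) ** 2)
        d1 = np.exp(-0.5 * ((x - mu[1]) / sigma) ** 2)
        r = pi * d1 / ((1 - pi) * d0 + pi * d1)
        # M-step: responsibility-weighted means and mixing weight.
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                       np.sum(r * x) / np.sum(r)])
        pi = np.mean(r)
    return mu, pi

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)])
mu, pi = em_two_gaussians(x)
print(np.sort(mu))  # roughly [-5, 5]
```

Each EM iteration is guaranteed not to decrease the observed-data likelihood, which is what makes the method attractive for models with hidden variables.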
