Let's Learn About A.I. Episode 6 - Graphs, Trees and Complexity

Hi everyone!
We’re back after the holidays, and our first episode of the new year discusses some math and computer science topics that will come up frequently in future episodes.

We discuss graphs first, which are a data structure used to store relationships between objects. You’ve probably heard the term “social graph” used to describe the friendships in a social network like Facebook or Twitter. These structures pop up frequently in computer science, the natural sciences, and in other areas. I give an example of when a graph might be useful, and try to clarify when it may not be useful.

https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)

https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
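To make the idea concrete, here is a minimal Python sketch of a social graph stored as an adjacency list - each person maps to the set of people they are connected to. The names and friendships are invented purely for illustration:

```python
# A minimal sketch of a social graph as an adjacency list:
# each person maps to the set of people they share an edge with.
# The names and friendships below are made up for illustration.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice"},
}

def are_friends(a, b):
    """Return True if an edge connects person a to person b."""
    return b in friends.get(a, set())

print(are_friends("alice", "bob"))   # True
print(are_friends("bob", "carol"))   # False
```

An adjacency list like this is just one way to store a graph; an adjacency matrix or an edge list would represent the same relationships with different trade-offs.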

After graphs, we go to a specific subset of graphs, called trees. Whereas a graph can look like a spider’s web, or a grid of streets, all trees look very similar - much like a family tree or an org chart of a large company. I discuss when and why trees are appropriate.

https://en.wikipedia.org/wiki/Tree_(data_structure)

https://medium.freecodecamp.org/all-you-need-to-know-about-tree-data-structures-bceacb85490c
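As a rough sketch of that family-tree/org-chart shape, here is a tiny tree in Python. The org chart itself is invented; the point is just that every node has one parent and any number of children:

```python
# A minimal tree sketch: each node holds a name and a list of children.
# The org chart below is invented for illustration.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def count_reports(node):
    """Count every person below `node` in the org chart."""
    return sum(1 + count_reports(child) for child in node.children)

ceo = Node("CEO", [Node("CTO", [Node("Engineer")]), Node("CFO")])
print(count_reports(ceo))  # 3
```

Notice there are no cycles and no cross-links: every node is reachable from the root by exactly one path, which is what separates trees from graphs in general.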

Finally, we discuss computational complexity and algorithm analysis. These topics come up frequently when writing code to solve interesting problems. Computational complexity theory is the study of classifying problems by their complexity (in terms of run time, memory used, etc.). Algorithm analysis is concerned with finding the amount of time, memory, or other resources an algorithm needs, usually as a function of the size of its input. Together, these tools allow for structured reasoning about the complexity of problems, and about which problems are feasible or infeasible given our current understanding and current hardware. Interestingly, “hard” math problems that cannot be solved with our current computing resources are actually the foundation of secure internet communication - cryptographers rely on what are called trapdoor functions to create secure encryption algorithms.
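One way to see algorithm analysis in action is to count the steps two standard search algorithms take on the same input. This is a textbook comparison rather than anything specific to the episode, but it shows what “as a function of the size of the input” means in practice:

```python
# Count comparisons for linear search vs binary search on a sorted
# list of n items - a standard algorithm-analysis exercise.
def linear_search_steps(items, target):
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps  # worst case grows linearly with n: O(n)

def binary_search_steps(items, target):
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps  # worst case grows logarithmically with n: O(log n)

data = list(range(1024))
print(linear_search_steps(data, 1023))  # 1024 comparisons
print(binary_search_steps(data, 1023))  # at most ~11 comparisons
```

Doubling the list size doubles the worst case for linear search but adds only one comparison for binary search - that difference in growth rate, not the raw step counts, is what algorithm analysis cares about.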

https://en.wikipedia.org/wiki/Computational_complexity_theory

https://en.wikipedia.org/wiki/Analysis_of_algorithms

https://en.wikipedia.org/wiki/Trapdoor_function

https://en.wikipedia.org/wiki/Discrete_logarithm
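The discrete logarithm is a classic candidate for a trapdoor (strictly, one-way) function: modular exponentiation is fast in one direction, but inverting it is believed to be hard for large primes. Here is a toy Python sketch - the numbers are far too small to be secure, and only the forward direction uses an efficient method:

```python
# Sketch of the one-way idea behind the discrete logarithm:
# computing g**x mod p is fast, but recovering x from the result
# is hard when p is large. These toy values are NOT secure.
p, g = 2_147_483_647, 7      # a small prime and a base; illustration only
secret_x = 123_456

# Forward direction: fast, even for huge exponents (built-in modular pow).
public_y = pow(g, secret_x, p)

# Backward direction: brute-force search for x, feasible here
# only because the numbers are tiny.
def discrete_log(y, g, p):
    value = 1
    for x in range(p):
        if value == y:
            return x
        value = (value * g) % p
    return None

print(discrete_log(public_y, g, p) == secret_x)  # True
```

With a real cryptographic prime (hundreds of digits), the brute-force loop would take longer than the age of the universe, while the forward `pow` still finishes instantly - which is exactly the asymmetry secure protocols exploit.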

I hope you all enjoy the episode, and I will talk to you soon!
Nick

Let's Learn About A.I. Episode 5 - Introduction to Agents, Part 2

Hi Everyone!

Welcome back for another episode. This one continues the topic from last time - what exactly do we mean by the term agent, and how are agents designed? We cover some categories of agents and some real-life examples. Finally, we talk about one of my favorite topics - agents that learn!

Next episode we start learning the more technical stuff, so buckle up!

Nick

The Wikipedia page that describes the categories discussed in this episode:

https://en.wikipedia.org/wiki/Intelligent_agent

Let's Learn About A.I. Episode 4 - Introduction to Agents, Part 1

Hi everyone!

We’re back after a long hiatus! In this episode, we start to introduce the more technical definition of an agent and how it interacts with its environment. I also discuss (rather abstractly) how to grade an agent, and why the ability to learn is important for something to be considered autonomous, or intelligent. Next episode, we will talk about some paradigms for how an agent can be implemented, and how it can learn from its environment. Hopefully these two will be the most boring episodes in the show :)


Nick

Resources:

Russell & Norvig - Artificial Intelligence: A Modern Approach

https://en.wikipedia.org/wiki/Intelligent_agent

Let's Learn About A.I. Episode 3 - Intro to Artificial Intelligence

Hi guys!
Episode 3 is here, and we cover some of the basics: what is an artificial intelligence, and what areas are currently being researched? We discuss differing approaches to A.I. - whether the goal should be creating a human-like intelligence or a rational intelligence, for instance, and whether the thought process is important, or only the result (the action). We also discuss some of the major fields of research inside artificial intelligence: natural language processing, computer vision, knowledge representation, knowledge processing, machine learning, and robotics.

I hope you enjoy the episode!

Resources:

The book:

Russell & Norvig, Artificial Intelligence: A Modern Approach (2008) http://aima.cs.berkeley.edu/

Turing's original paper on the imitation game:

https://www.csee.umbc.edu/courses/471/papers/turing.pdf

Topics that we covered:

cognitive biases: https://en.wikipedia.org/wiki/List_of_cognitive_biases

logical forms: https://en.wikipedia.org/wiki/Logical_form

complexity theory: https://en.wikipedia.org/wiki/Computational_complexity_theory

 

Let's Learn About A.I. Episode 2 - History of A.I.

Hi guys!

Episode 2 is here. I cover a bit about what A.I. is, and dive right into artificial intelligence throughout recorded human history. I am not going to have any show notes for this one since there isn't much to follow along with, but I am posting my resources, so you have a place to start doing a little research yourself, if you'd like. Hope you enjoy the episode! 

- Nick

Resources:

websites:

https://en.wikipedia.org/wiki/Artificial_intelligence

https://en.wikipedia.org/wiki/History_of_artificial_intelligence

 

essays:

https://rodneybrooks.com/forai-the-origins-of-artificial-intelligence/

http://people.csail.mit.edu/brooks/idocs/DartmouthProposal.pdf

 

books:

Russell & Norvig, Artificial Intelligence: A Modern Approach (2008) http://aima.cs.berkeley.edu/

Lucci & Kopec, Artificial Intelligence in the 21st Century (2012)