Unsupervised Thinking

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 53:20:43


Synopsis

A podcast about neuroscience, artificial intelligence, and science more broadly, run by a group of computational neuroscientists.

Episodes

  • Models of the Mind: How physics, engineering and mathematics have shaped our understanding of the brain

    16/06/2021 Duration: 01h18min

    Grace wrote a book! And she talked to Brain Inspired host Paul Middlebrooks about it. The book is about the many different ways mathematical methods have influenced neuroscience, from models of single cells all the way up to equations to explain behavior. You can learn more about the book and how to get it in ebook, audiobook, and hardcover worldwide by visiting tinyurl.com/h9dn4bw7. On this cross-posting of Brain Inspired, Grace talks about the book and the field of computational neuroscience more generally. Give it a listen and go check out other episodes of Brain Inspired for more great conversations.

  • E50: Brain Organoids

    30/10/2019 Duration: 01h28s

    Most neuroscience research takes place in a full, live animal. But brain organoids are different. Brain organoids are three-dimensional blobs of brain grown from human stem cells and they offer novel access to the study of human brain development. On this episode we go beyond our computational comfort zone to talk about the history of stem cells, the potion of chemicals needed to get these little blobs to grow, and the extent to which they mimic features of the human brain when they do. We also discuss the promise of studying and treating disease through personalized organoids, and how this gets hard for higher level disorders like schizophrenia. Then we get into questions of embodiment and if giving these organoids more means to interact with the world would make them better models of the brain and of information processing. Finally we get to the ethics of it all, and find that bioethicists these days are actually chill AF. Throughout, we find out that Josh is not surprised by any of this, and we tackle the…

  • E49: How Important is Learning?

    01/10/2019 Duration: 01h06min

    The age-old debate of nature versus nurture is now being played out between artificial intelligence and neuroscience. The dominant approach in AI, machine learning, puts an emphasis on adapting processing to fit the data at hand. Animals, on the other hand, seem to have a lot of built-in structure and tendencies that mean they function well right out of the womb. So are most of our abilities the result of genetically-encoded instructions, honed over generations of evolution? Or are our interactions with the environment key? We discuss the research that has been done on human brain development to try to get at the answers to these questions. We talk about the compromise position that says animals may be "born to learn"---that is, innate tendencies help make sure the right training data is encountered and used efficiently during development. We also get into what all this means for AI and whether machine learning researchers should be learning less. Throughout, we ask if humans are special, argue that development…

  • E48: Studying the Brain in Light of Evolution

    29/08/2019 Duration: 59min

    The brain is the result of evolution. A lot of evolution. Most neuroscientists don't really think about this fact. Should we? On this episode we talk about two papers---one focused on brains and the other on AI---that argue that following evolution is the path to success. As part of this argument, they make the point that, in evolution, each stage along the way needs to be fully functional, which impacts the shape and role of the brain. As a result, the system is best thought of as a whole---not chunked into perception, cognition and action, as many psychologists and neuroscientists are wont to do. In discussing these arguments, we talk about the role of representations in intelligence, go through a bit of the evolution of the nervous system, and remind ourselves that evolution does not necessarily optimize. Throughout, we ask how this take on neuroscience impacts our own work and try to avoid saying "represents".

  • E47: Deep Learning to Understand the Brain

    30/07/2019 Duration: 01h05min

    The recent advances in deep learning have done more than just make money for startups and tech companies. They've also infiltrated neuroscience! Deep neural networks---models originally inspired by the basics of the nervous system---are finding ever more applications in the quest to understand the brain. We talk about many of those uses in the episode. After first describing more traditional approaches to modeling behavior, we talk about how neuroscientists compare deep net models to real brains using both performance and neural activity. We then get into the attempts by the field of machine learning to understand their own models and how ML and neuroscience can share methods (and maybe certain cultural tendencies). Finally we talk about the use of deep nets to generate stimuli specifically tailored to drive real neurons to their extremes. Throughout, we notice how deep learning is "complicating the narrative", ask "are deep nets normative models?", and struggle to talk about a topic we actually know about.

  • E46: What We Learn from Model Organisms

    27/06/2019 Duration: 01h01min

    From worms to flies, and mice to macaques, neuroscientists study a range (but not a very large range...) of animals when they study "the brain". On this episode we ask a lot of questions about these model organisms, such as: how are they chosen? should we use more diverse ones? and what is a model organism actually a model of? We also talk about how the development of genetic tools for certain animals, like mice, has made them the dominant lab animal and the difficulty of bringing a new model species onto the scene. We also get into the special role that simple organisms, like C. elegans, play and how we can extrapolate findings from these small animals to more complex ones. Throughout, special guest Adam Calhoun joins us in asking "What even is the purpose of neuroscience???" and discussing the extent to which mice do or do not see like humans.

  • E45: How Working Memory Works

    29/05/2019 Duration: 59min

    Working memory is the ability to keep something in mind several seconds after it's gone. Neurons don't tend to keep firing when their input is removed, so how does the brain hold on to information when it's out of sight? Scientists have been probing this question for decades. On this episode, we talk about how working memory is studied and the traditional view of how it works, which includes elevated persistent firing rates in neurons in the prefrontal cortex. The traditional view, however, is being challenged in many ways at the moment. As evidence of that we read a "dueling" paper on the topic, which argues for a view that incorporates bursts of firing, oscillations, and synaptic changes. In addition to covering the experimental evidence for different views, we also talk about the many computational models of working memory that have been developed over the years. Throughout, we talk about energy efficiency, the difference between maintenance and manipulation, and the effects of putting scientific disagreement…
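
    The persistent-firing account described in this episode can be caricatured in a few lines. Below is a toy sketch of our own (made-up parameters, not a model discussed on the show): a single rate unit whose recurrent excitation makes it bistable, so a brief cue switches it into an elevated firing state that outlasts the input.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(steps=400, dt=0.01, tau=0.1, w=1.0, theta=0.5, gain=10.0):
    """One rate unit with recurrent weight w, threshold theta (toy values)."""
    r, trace = 0.0, []
    for t in range(steps):
        stim = 1.0 if 100 <= t < 150 else 0.0   # brief cue, then silence
        # rate dynamics: relax toward a sigmoid of recurrent drive + input
        r += dt / tau * (-r + sigmoid(gain * (w * r + stim - theta)))
        trace.append(r)
    return trace

trace = simulate()
# The rate is near zero before the cue, but stays elevated long after
# the cue is removed, because recurrent excitation sustains it.
print(f"before cue: {trace[99]:.2f}, after cue: {trace[-1]:.2f}")
```

    In the "dueling" view the episode covers, this kind of fixed-point persistence would be supplemented or replaced by bursts, oscillations, and short-term synaptic changes.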

  • E44: Can a Biologist Fix a Radio?

    25/04/2019 Duration: 01h05min

    In 2002, cancer biologist Yuri Lazebnik raised and addressed the semi-facetious question "Can a biologist fix a radio?" in a short paper. The paper is a critique of current practices in the biological sciences, claiming they are inefficient at getting to truth. We discuss the stages of research progress in biological science Yuri describes, including the "paradoxical" stage where more facts lead to less understanding. We then dive into his view of how a biologist would approach a radio: describing what its parts look like, lesioning some of them, and making claims about what's necessary for the radio to work as a result. We reflect on how this framing of common biological research practices impacts our view of them and highlights how hard it is to understand complex systems. We talk about the (in)adequacy of Yuri's proposed solution to the problem (that biologists need to embrace formal, quantitative language) and the difference between engineering and science. Finally, we discuss a new take on this paper that…

  • E43: What Are Glia Up to?

    28/03/2019 Duration: 01h05min

    Despite the fact that the brain is full of them, glial cells don't get much attention from neuroscientists. The traditional view of these non-neurons is that they are supportive cells---there to silently help neurons do what they need to do. On this episode we start by describing this traditional view, including types of glial cells and their roles. Then we get into the more interesting stuff. How do glia communicate with each other and with neurons? Turns out there are many chemical messages that get sent between these different cell types, including via the energy molecule ATP! We then talk about the ways in which these messages impact neurons and reasons why the role of glia may be hard for neuroscientists to see. In particular, glia seem to have a lot to say about the birth and control of synapses, making them important for scientists interested in learning. Finally we cover some of the diseases related to glia, such as multiple sclerosis and (surprisingly) depression. Throughout, we ask if glia are important…

  • E42: Learning Rules, Biological vs. Artificial

    26/02/2019 Duration: 01h02min

    For decades, neuroscientists have explored the ways in which neurons update and control the strength of their connections. For slightly fewer decades, machine learning researchers have been developing ways to train the connections between artificial neurons in their networks. The former endeavour shows us what happens in the brain and the latter shows us what's actually needed to make a system that works. Unfortunately, these two research directions have not settled on the same rules of learning. In this episode we will talk about the attempts to make artificial learning rules more biologically plausible in order to understand how the brain is capable of the powerful learning that it is. In particular, we focus on different models of biologically-plausible backpropagation---the standard method of training artificial neural networks. We start by explaining both backpropagation and biological learning rules (such as spike time dependent plasticity) and the ways in which the two differ. We then describe four different…

  • E41: Training and Diversity in Computational Neuroscience

    30/01/2019 Duration: 01h10min

    This very special episode of Unsupervised Thinking takes place entirely at the IBRO-Simons Computational Neuroscience Imbizo in Cape Town, South Africa! Computational neuroscience is a very interdisciplinary field and people come to it in many different ways from many different backgrounds. In this episode, you'll hear from a variety of summer school students who are getting some of their first exposure to computational neuroscience as they explain their background and what they find interesting about the field. In the second segment of the episode, we go into a conversation with the teaching assistants about what could make training in computational neuroscience better in the future and what we wish we had learned when we entered the field. Finally, we throw it back to the students to summarize the impact this summer school had on them and their future career plans.

  • E40: Global Science

    19/12/2018 Duration: 01h10min

    In the past few years, we've noticed researchers making more explicit efforts to engage with scientists in other countries, particularly those where science isn't well-represented. Inspired by these efforts, we took a historical dive into the international element of science with special guest Alex Antrobus. How have scientists viewed and communicated with their peers in other countries over time? To what extent do nationalist politics influence science and vice versa? How did the euro-centric view of science arise? In tackling these issues, we start in the 1700s and work our way up to the present, covering the "Republic of Letters," the Olympic model of scientific nationalism, communism, and decolonization. We end by discussing the ethical pros and cons of mentoring and building academic "outposts" in other countries. Throughout, we talk about the benefits of open science, the King of Spain's beard, and how Grace doesn't do sports.

  • E39: What Does the Cerebellum Do?

    29/11/2018 Duration: 01h51s

    Cerebellum literally means "little brain," and in a way, it has been treated as a second-class citizen in neuroscience for a while. In this episode we describe the traditional view of the cerebellum as a circuit for motor control and associative learning and how its more cognitive roles have been overlooked. First we talk about the beautiful architecture of the cerebellum and the functions of its different cell types, including the benefits of diversity. We then discuss the evidence for non-motor functions of the cerebellum and why this evidence was hard to find until recently. During this, we struggle to explain what cognitive issues someone with a cerebellar lesion may have and special guest/cerebellum expert Alex Cayco-Gajic tests our cerebellar function. Finally, we end by lamenting the fact that good science is impossible and Alex tells us how the future of neuroscience is subcortical!

  • E38: Reinforcement Learning - Biological and Artificial

    28/10/2018 Duration: 56min

    Reinforcement learning is important for understanding behavior because it tells us how actions are guided by reward. But the topic also has a broader significance---as an example of the happy marriage that can come from blending computer science, psychology and neuroscience. In this way, RL is a poster child for what's known as Marr's levels of analysis, an approach to understanding computation that essentially asks why, how, and where. On this episode we first define some of the basic terms of reinforcement learning (action, state, environment, policy, value). Then we break it down according to Marr's three levels: what is the goal of RL? How can we (or an artificial intelligence) learn better behavior through rewards? and where in the brain is this carried out? Also we get into the relationship between reinforcement learning and evolution, discuss what counts as a reward, and try to improvise some relatable examples involving cake, cigarettes, chess, and tomatoes.
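
    The basic terms this episode defines (state, action, reward, policy, value) can be made concrete with tabular Q-learning, one standard RL algorithm. This is a toy of our own construction, not an example from the show: a five-state chain where moving right from the last state yields reward.

```python
import random

random.seed(0)
n_states = 5
actions = [-1, +1]                      # actions: move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}  # value table
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def choose(s):
    # epsilon-greedy policy: mostly exploit, sometimes explore;
    # ties are broken at random so early exploration is unbiased
    if random.random() < epsilon:
        return random.choice(actions)
    best = max(q[(s, a)] for a in actions)
    return random.choice([a for a in actions if q[(s, a)] == best])

for episode in range(500):
    s = 0                               # state: position on the chain
    for step in range(20):
        a = choose(s)
        s_next = min(max(s + a, 0), n_states - 1)   # environment dynamics
        reward = 1.0 if (s == n_states - 1 and a == 1) else 0.0
        # Q-learning update: nudge the value of (state, action) toward
        # reward + discounted value of the best next action
        best_next = max(q[(s_next, b)] for b in actions)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The learned policy: the highest-value action in each state
policy = {s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states)}
print(policy)
```

    In Marr's terms, the reward-maximization objective is the computational level, the Q-learning update is one algorithmic-level answer, and the episode's "where" question is about its implementation in the brain.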

  • E37: What is an Explanation? - Part 2

    27/09/2018 Duration: 57min

    In part two of our conversation on what counts as an explanation in science, we pick up with special guest David Barack giving his thoughts on the "model–mechanism–mapping" criteria for explanation. This leads us into a lengthy discussion on explanatory versus phenomenological (or "descriptive") models. We ask if there truly is a distinction between these model classes or if a sufficiently good description will end up being explanatory. We illustrate these points with examples such as the Nernst equation, the Hodgkin-Huxley model of the action potential, and multiple uses of Difference of Gaussians in neuroscience. Throughout, we ask such burning questions as: can a model be explanatory if the people who made it thought it wasn't? are diagrams explanations? and, is gravity descriptive or mechanistic?

  • E36: What is an Explanation? - Part 1

    29/08/2018 Duration: 51min

    As scientists, we throw around words like "explanation" a lot. We assume explaining stuff is part of what we're doing when we make and synthesize discoveries. But what does it actually take for something to be an explanation? Can a theory or model be successful without truly being one? How do these questions play out in computational neuroscience specifically? We bring in philosopher-neuroscientist David Barack to tackle this big topic. In part one of the conversation, David describes the historical trajectory of the concept of "explanation" in philosophy. We then take some time to try to define computational neuroscience, and discuss "computational chauvinism": the (extremist) view that the mind could be understood and explained independently of the brain. We end this first half of the conversation by defining the "3M" model of explanation and giving our initial reactions to it.

  • E35: Generative Models

    31/07/2018 Duration: 57min

    Machine learning has been making big strides in a lot of straightforward tasks, such as taking an image and labeling the objects in it. But what if you want an algorithm that can, for example, generate an image of an object? That's a much vaguer and more difficult request. And it's where generative models come in! We discuss the motivation for making generative models (in addition to making cool images) and how they help us understand the core components of our data. We also get into the specific types of generative models and how they can be trained to create images, text, sound and more. We then move on to the practical concerns that would arise in a world with good generative models: fake videos of politicians, AI assistants making our phone calls, and computer-generated novels. Finally, we connect these ideas to neuroscience, asking both how can neuroscientists make use of these and is the brain a generative model?

  • E34: The Gut-Brain Connection

    29/06/2018 Duration: 52min

    Because of the sheer number of neurons in the gut, the enteric nervous system is sometimes called the second brain. What're all those neurons doing down there? And what, or who, is controlling them? Science has recently revealed that the incredibly large population of microorganisms in the gut have a lot to say to the brain, by acting on these neurons and other mechanisms, and can impact everything from stress to obesity to autism. In this episode, we give the basic stats and facts about the enteric nervous system (and argue about whether it really is a "second brain") and cover how the gut can alter the brain via nerves, hormones, and the immune system. We then talk about what happens when mice are raised without gut microbes (weird) and whether yogurt has any chance of curing things like anxiety. Throughout, we marvel at how intuitive all this seems despite being incredibly difficult to actually study. All that plus: obscure literary references, Josh's hilariously extreme fear of snakes, multiple misuses of…

  • E33: Predictive Coding

    30/05/2018 Duration: 01h34s

    You may have heard of predictive coding; it's a theory that gets around. In fact, it's been used to understand everything from the retina to consciousness. So, before we get into the details, we start this episode by describing our impressions of predictive coding. Where have we encountered it? Has it influenced our work? Why do philosophers like it? And, finally, what does it actually mean? Eventually we settle on a two-tiered definition: "hard" predictive coding refers to a very specific hypothesis about how the brain calculates errors, and "soft" predictive coding refers to the general idea that the brain predicts things. We then get into how predictive coding relates to other theories, like Bayesian modeling. But like Bayesian models, which we've covered on a previous episode, predictive coding is prone to "just-so" stories. So we discuss what concrete predictions predictive coding can make, and whether the data supports them. Finally, Grace tries to describe the free energy principle, which extends predictive coding…
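
    The "hard" reading this episode describes---the brain explicitly computing prediction errors---can be sketched in a toy loop. This is our own illustration with arbitrary numbers, not a model from the show: a prediction unit is updated only by the error between its prediction and the incoming signal.

```python
def predictive_step(prediction, x, lr=0.2):
    """One predictive-coding step with a toy learning rate."""
    error = x - prediction        # error unit: what the prediction missed
    prediction += lr * error      # prediction unit: corrected by the error
    return prediction, error

prediction = 0.0
for t in range(30):
    x = 1.0                       # a constant, fully predictable input
    prediction, error = predictive_step(prediction, x)

# After repeated exposure the prediction matches the input and the error
# shrinks toward zero, so there is almost nothing left to pass upstream.
print(round(prediction, 3), round(error, 3))
```

    The "soft" reading makes no commitment to this error-unit architecture; it only claims that the brain predicts, which is why the hard version is the one that yields concrete, testable predictions.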

  • E32: How Do We Study Behavior?

    01/05/2018 Duration: 59min

    There is a tension when it comes to the study of behavior in neuroscience. On the one hand, we would love to understand animals as they behave in the wild---with the full complexity of the stimuli they take in and the actions they emit. On the other hand, such complexity is almost antithetical to the scientific endeavor, where control over inputs and precise measurement of outputs are required. Throw in the constraints that come when trying to record from and manipulate neurons and you've got a real mess. In this episode, we discuss these tensions and the modern attempts to resolve them. First, we take the example of decision-making in rodents to showcase what behavior looks like in neuroscience experiments (and how strangely we use the term "decision-making"). In these studies, using more natural stimuli can help with training and lead to better neural responses. But does going natural make the analysis of the data more difficult? We then talk about how machine learning can be used to automate the analysis…

page 1 of 3