  • Bayesian information theory

    April 9, 2021

    Shannon’s information theory defines quantity of information (e.g. self-information $-\lg p(x)$) in terms of probabilities. In the context of data compression, these probabilities are given a frequentist interpretation (Shannon makes this interpretation explicit in his 1948 paper). In Deconstructing Bayesian Inference, I introduced the idea of a subjective data distribution. If quantities of information are calculated using a subjective data distribution, what do they mean? Below I answer this question by building, from the ground up, a different notion of Bayesian inference. …

    article · epistemology · information

  • Deconstructing Bayesian Inference

    March 31, 2021

    I pose the question: Why predict probabilities rather than predicting outcomes without probabilities? I first define Bayesian inference, then remove the probabilities involved in multiple passes until none remain. I then examine the result, and eventually motivate bringing probabilities back into our predictions. …

    article · epistemology

  • Classical vs Bayesian Reasoning

    February 24, 2021

    My goal is to identify the core conceptual difference between someone who accepts “Bayesian reasoning” as a valid way to obtain knowledge about the world, vs someone who does not accept Bayesian reasoning, but does accept “classical reasoning”. By classical reasoning, I am referring to the various forms of logic that have been developed, starting with Aristotelian logic, through propositional logic like that of Frege, and culminating in formal mathematics (e.g. higher order type theory). In such logics, the goal is to uniquely determine the truth values of things (such as theorems and propositions) from givens. …

    article · epistemology

  • Bayesian Inference On 1st Order Logic

    February 21, 2021

    David Chapman’s blog post titled Probability theory does not extend logic has stirred up some controversy. In it, Chapman argues that so-called Bayesian logic, as it is currently understood, is limited to propositional logic (0th order logic) and cannot generalize to higher order logics (e.g. predicate logic, a.k.a. 1st order logic), and thus cannot be a general foundation for inference from data under uncertainty. Chapman provides a few counter-examples that supposedly demonstrate that doing Bayesian inference on statements in 1st order logic is incoherent. I think there is a lot of confusion surrounding this point because Chapman does not use proper probability notation. In the following article I show how Chapman’s examples can be properly written and made sense of using random variables. Hopefully this clarifies some things. …

    article · bayesian

  • Variational Solomonoff Induction

    February 18, 2021

    free-energy · machine-learning · variational-ML · note

  • Active inference tutorial (actions)

    February 7, 2021

    free-energy · note

  • Free Energy Principle 1st Pass

    February 7, 2021

    free-energy · note

  • How this blog works

    February 6, 2021

    personal · blogging · note

  • Blogging experiment

    February 1, 2021

    personal · blogging · note

  • Probability Theory and Its Philosophy

    June 19, 2020

    Probability is a measure defined on events, which are sets of primitive outcomes. Probability theory mostly comes down to constructing events and measuring them. A measure is a generalization of size corresponding to length, area, and volume (rather than the bijective-mapping definition of cardinality). …

    article · probability

This is a blog by Dan Abolafia.
