Bayesian Logic Programming (bayesianlogic.github.io)
125 points by jlturner on Dec 17, 2015 | 20 comments


It's by Stuart Russell's group at Berkeley (he coauthored the AI bible with Peter Norvig). Their tutorial: http://bayesianlogic.github.io/download/BLOG-tutorial-2014.p... Page 58 has some good sample code. Semantics are on page 70.

Every well-formed BLOG model specifies a unique proper probability distribution over all possible worlds definable given its vocabulary:
• no infinite receding ancestor chains;
• no conditioned cycles;
• all expressions finitely evaluable;
• functions of countable sets.

They instantiate some parts of the network and do inference with MCMC. I wonder how it compares to the Markov Logic approach from the University of Washington.
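For anyone who hasn't looked at the details: the inference they describe is essentially the textbook random-walk Metropolis-Hastings loop applied to (a partially instantiated) possible world. A bare-bones sketch in Python/NumPy of the generic algorithm (not BLOG's actual engine, just the idea) looks like this:

    import numpy as np

    def metropolis_hastings(log_target, init, n_samples=10000, step=0.5):
        """Random-walk Metropolis-Hastings: propose a Gaussian step,
        accept with probability min(1, target(x') / target(x))."""
        x = init
        log_p = log_target(x)
        samples = []
        for _ in range(n_samples):
            proposal = x + step * np.random.randn()
            log_p_prop = log_target(proposal)
            if np.log(np.random.rand()) < log_p_prop - log_p:
                x, log_p = proposal, log_p_prop  # accept the proposal
            samples.append(x)                    # otherwise keep the current state
        return np.array(samples)

    # Toy example: sample from a standard normal "posterior".
    samples = metropolis_hastings(lambda x: -0.5 * x**2, init=0.0)
    print(samples.mean(), samples.std())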


Interesting. They talk about plain old Metropolis-Hastings, which is pretty questionable.

To anyone excited about this: I highly recommend checking out Stan; it's under active development, actually works on real problems, and is used in the real world. With NUTS and HMC they've really made good on their promises, and quite soon they'll have meaningful ADVI support. See this earlier discussion: https://news.ycombinator.com/item?id=10244771
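For a taste of the workflow, here's a minimal sketch assuming the PyStan 2.x API (the normal-mean model and the toy data are just made-up illustrations, not from the Stan docs):

    import pystan  # Python wrapper for Stan

    model_code = """
    data {
      int<lower=0> N;
      vector[N] y;
    }
    parameters {
      real mu;
      real<lower=0> sigma;
    }
    model {
      mu ~ normal(0, 10);
      sigma ~ cauchy(0, 5);
      y ~ normal(mu, sigma);
    }
    """

    sm = pystan.StanModel(model_code=model_code)   # compiles the model to C++
    fit = sm.sampling(data={"N": 5, "y": [1.2, 0.8, 1.5, 0.9, 1.1]},
                      iter=2000, chains=4)          # NUTS by default
    print(fit)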


Stan (http://mc-stan.org/) is impressive, but isn't this BLOG language easier to read and perhaps easier for novices to create models in? Marrying the power of Stan with the ease and speed of writing BLOG models could create the next generation of probabilistically driven experiences by opening that power up to more people, and that would be a cool thing.


I'm a bit familiar with PyMC, but all it seems to do is Gibbs sampling, which mixes horribly compared to HMC.

How easy would the transition to PyStan be?


Not trivial, but there's a Python wrapper available:

https://github.com/stan-dev/pystan


PyMC3 implements HMC. It is still in beta but quite stable.


If you're interested: http://pymc-devs.github.io/pymc3/

PyMC3 uses Theano to build a compute graph of the model, which then gets compiled to C. Moreover, it gives us the gradient for free, so that HMC and NUTS can be used, which work well on models of high complexity.
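For the curious, here's a minimal sketch of the workflow (assuming the current PyMC3 beta API, where sd is the scale keyword; the toy data and priors are made up):

    import numpy as np
    import pymc3 as pm

    data = np.random.randn(100) + 1.0         # toy observations

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sd=10)      # prior on the mean
        sigma = pm.HalfNormal('sigma', sd=1)   # prior on the scale
        obs = pm.Normal('obs', mu=mu, sd=sigma, observed=data)
        trace = pm.sample(2000)                # NUTS, gradients via Theano

    print(trace['mu'].mean())                  # posterior mean of mu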

I use it in production, despite it still being beta. We're close to the first stable release but there are still some small kinks to figure out.

Disclaimer: I'm a co-developer.


Probably pretty easy.

It is different, but the core semantics are the same, so you mostly just have to learn the new syntax (and live with worse Python integration).


They really called their language "blog"? I have to think that wasn't the best name ever...


Perhaps it was an advanced defense measure against searches, since it was funded by a defense agency. \s

It will probably never get as popular as the generic term "blog", so it will be difficult to search for "how to do X in BLOG?", which in turn will only hold back adoption further.

Perhaps they should consider renaming it to BayeLog or something.


Oh please, this is so irritating.

Until Google "got it", searching for R was a pain (that was before the -lang suffix got popular).

Pick a unique name with several letters and a moderately used word, like Python or Ruby; it's not hard.


"We've coded up the application that will run your business. It has a 80% chance of working correctly roughly 20% of the time with a 95% confidence interval."


That's _exactly_ how "machine learning" works, and nobody complains.

In fact, businesses can't get enough of it.

(Statistics isn't something strange to business-logic types anyway; they understand probabilities and confidence intervals.)


I don't think they understand confidence intervals, or at least they think they do but get them wrong. It's the same as misunderstanding the p-value.

A 95% confidence interval means that the estimation procedure produces intervals that contain the true parameter 95% of the time over repeated sampling. It's not equivalent to a credible interval.

https://stats.stackexchange.com/questions/2272/whats-the-dif...
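To make the frequentist reading concrete, here's a small simulation sketch in Python (my own illustration, not from the linked thread): "95%" refers to how often the interval-producing procedure covers the true mean across repeated experiments, not to a probability statement about any single interval.

    import numpy as np

    true_mu, sigma, n, trials = 5.0, 2.0, 30, 10000
    covered = 0
    for _ in range(trials):
        sample = np.random.normal(true_mu, sigma, n)
        half_width = 1.96 * sigma / np.sqrt(n)      # known-sigma 95% interval
        lo, hi = sample.mean() - half_width, sample.mean() + half_width
        covered += (lo <= true_mu <= hi)

    print(covered / trials)  # ~0.95: coverage over repeated experiments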


More resources related to this subject:

http://probabilistic-programming.org

There are probably some alternative, actively developed projects with the same objective as BLOG listed on that page.


Doesn't seem to be under active development. https://github.com/BayesianLogic/blog


Nice ungooglable project name


Also check out FIGARO and Hakaru.


Why would DARPA fund this?


Detect soldiers on the ground in video streams with confidence levels, and let the drone kill them.

For example (pseudocode):

    // Priors over observable features of a person in the video
    random Boolean IsRunning ~ BooleanDistrib(0.001);
    random Boolean CarNearby ~ BooleanDistrib(0.001);
    random Boolean HasGun ~ BooleanDistrib(0.002);

    // Probability of the label given the observed features
    random Boolean IsTerrorist ~
      if IsRunning then
        if HasGun then BooleanDistrib(0.95)
        else BooleanDistrib(0.04)
      else
        if CarNearby then BooleanDistrib(0.29)
        else BooleanDistrib(0.001);

    // Condition on what the drone saw, then query the posterior
    obs IsRunning = true;
    obs HasGun = true;

    query IsTerrorist;



