Every well-formed BLOG model specifies a unique proper probability distribution over all possible worlds definable given its vocabulary:
• no infinite receding ancestor chains;
• no conditioned cycles;
• all expressions finitely evaluable;
• functions of countable sets.
They instantiate some parts of the network and do inference with MCMC. I wonder how it compares to the Markov Logic approach from the University of Washington.
Interesting. They talk about plain old Metropolis-Hastings, which is pretty questionable.
Anyone excited about this, I highly recommend checking out Stan; it's under active development, actually works on real problems, and is used in the real world. With NUTS and HMC they've really made good on their promises, and soon they'll have meaningful ADVI support. See this earlier discussion: https://news.ycombinator.com/item?id=10244771
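For anyone curious what that looks like in practice, here's a minimal beta-Bernoulli sketch driven through the PyStan 2.x interface (the model and data are invented for illustration, not taken from this thread):

import pystan

# Hypothetical coin-flip model; Stan samples theta with NUTS by default.
model_code = """
data {
  int<lower=0> N;
  int<lower=0,upper=1> y[N];
}
parameters {
  real<lower=0,upper=1> theta;
}
model {
  theta ~ beta(1, 1);     // uniform prior
  y ~ bernoulli(theta);   // likelihood
}
"""

sm = pystan.StanModel(model_code=model_code)
fit = sm.sampling(data={'N': 5, 'y': [1, 0, 1, 1, 0]}, iter=2000, chains=4)
print(fit)  # posterior summary for theta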
Stan (http://mc-stan.org/) is impressive, but isn't this BLOG language easier to read, and perhaps easier for novices to create models in? Marrying the power of Stan with the ease and speed of implementation of BLOG could create the next generation of probabilistically driven experiences by opening up that power to more people, and that would be a cool thing.
PyMC3 uses Theano to create a compute graph of the model, which then gets compiled to C. Moreover, it gives us the gradient for free, so that HMC and NUTS can be used, which work well on models of high complexity.
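As a minimal sketch of what that looks like (v3-era PyMC3 API assumed; the coin-flip model and data are made up for illustration):

import pymc3 as pm

with pm.Model():
    # Uniform prior on the success probability
    theta = pm.Beta('theta', alpha=1, beta=1)
    # Likelihood of the observed flips
    pm.Bernoulli('obs', p=theta, observed=[1, 0, 1, 1, 0])
    # NUTS is the default sampler; Theano supplies the gradients automatically
    trace = pm.sample(1000, tune=1000)

Theano compiles the log-probability and its gradient once up front, which is what makes HMC/NUTS practical on larger models.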
I use it in production, despite it still being in beta. We're close to the first stable release, but there are still some small kinks to work out.
Perhaps it was an advanced defense measure against searches, since it was funded by a defense agency. \s
It will probably never get as popular as the generic term "blog", so it will be difficult to search for "how to do X in BLOG?", which in turn will hold back its popularity.
Perhaps they should consider renaming it to bayelog or something.
"We've coded up the application that will run your business. It has a 80% chance of working correctly roughly 20% of the time with a 95% confidence interval."
I don't think they understand confidence intervals; or at least they think they do, but they get it wrong. It's the same kind of mistake as misunderstanding a p-value.
A 95% confidence interval means that the estimation procedure, over repeated sampling, produces intervals that contain the true parameter 95% of the time. That is not equivalent to a credible interval, which states that the parameter lies in a given interval with 95% posterior probability.
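To make the frequentist reading concrete, here's a small Python simulation sketch (all numbers invented for illustration) showing that the 95% guarantee is about the procedure across repeated experiments, not about any single interval:

import numpy as np

rng = np.random.default_rng(0)
true_mu, n, trials = 5.0, 50, 10000
covered = 0
for _ in range(trials):
    # Rerun the whole experiment: draw a fresh sample, build a 95% z-interval
    sample = rng.normal(true_mu, 2.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    mean = sample.mean()
    covered += (mean - 1.96 * se <= true_mu <= mean + 1.96 * se)
print(covered / trials)  # ~0.95: the procedure covers the truth 95% of the time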
Detect soldiers on the ground in video streams with confidence levels, and let the drone kill them.
For example (pseudocode):
random Boolean IsRunning ~ BooleanDistrib(0.001);
random Boolean CarNearby ~ BooleanDistrib(0.001);
random Boolean HasGun ~ BooleanDistrib(0.002);

// Whether someone is a terrorist depends on the evidence above.
random Boolean IsTerrorist ~
  if IsRunning then
    if HasGun then BooleanDistrib(0.95)
    else BooleanDistrib(0.04)
  else
    if CarNearby then BooleanDistrib(0.29)
    else BooleanDistrib(0.001);

// Condition on what was observed, then query the posterior.
obs IsRunning = true;
obs HasGun = true;

query IsTerrorist;