## Archive for the ‘talks’ Category

## What’s Hot in IT

March 2, 2018

Attended the What’s Hot in IT event held by the Victorian ICT for Women group. See their Tweets about the event. Prof. Maria Garcia De La Banda (2nd from left) gave a fabulous overview.

## Notes on Determinantal Point Processes

September 11, 2017

I’m giving a tutorial on these amazing processes while in Moscow. The source “book” for this is of course Alex Kulesza and Ben Taskar’s “Determinantal Point Processes for Machine Learning”, *Foundations and Trends® in Machine Learning*: Vol. 5, No. 2–3, pp. 123–286, 2012.

If you have an undergraduate degree in mathematics with loads of multi-linear algebra and real analysis, this stuff really is music for the mind. The connections and results are very cool. In my view, these guys don’t spend enough time in their introduction on Gram matrices, which really are the starting point for everything. In their online video tutorials they got this right and led with these results.

There are also a few interesting connections they didn’t mention. Anyway, I wrote some additional lecture notes to give some of the key results mentioned in the long article and elsewhere that didn’t make their tutorial slides.
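To make the Gram-matrix starting point concrete, here is a minimal sketch (using NumPy, with toy data of my own invention, not from the tutorial): build an L-ensemble kernel as a Gram matrix of item feature vectors, so the probability of drawing a subset is proportional to the determinant of the corresponding principal submatrix, and marginal inclusion probabilities come from the kernel K = L(L + I)⁻¹.

```python
import numpy as np

# Toy item features: rows of B are feature vectors, so L = B @ B.T is a
# Gram matrix and hence positive semi-definite by construction.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 6))           # 4 items, 6-dimensional features
L = B @ B.T                            # L-ensemble kernel

n = L.shape[0]
# Normaliser: the sum of det(L_Y) over all subsets Y equals det(L + I).
Z = np.linalg.det(L + np.eye(n))

# Probability the DPP draws exactly the subset {0, 2}.
Y = [0, 2]
p_exact = np.linalg.det(L[np.ix_(Y, Y)]) / Z

# Marginal kernel K = L (L + I)^{-1}; then P(A is contained in the draw)
# = det(K_A), which upper-bounds the exact-subset probability.
K = L @ np.linalg.inv(L + np.eye(n))
p_marginal = np.linalg.det(K[np.ix_(Y, Y)])
```

Similar feature vectors make off-diagonal entries of L large, which shrinks the determinant, so the process prefers diverse subsets.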

## Advanced Methodologies for Bayesian Networks

August 22, 2017

The 3rd Workshop on Advanced Methodologies for Bayesian Networks was run in Kyoto, September 20–22, 2017. The workshop was well organised, and the talks were great. Really good invited talks by great speakers!

I’ll be talking about our (with François Petitjean, Nayyar Zaidi and Geoff Webb) recent work with Bayesian Network Classifiers:

**Backoff methods for estimating parameters of a Bayesian network**

Various authors have highlighted inadequacies of BDeu-type scores, and this problem is shared in parameter estimation. Basically, Laplace estimates work poorly, at least because setting the prior concentration is challenging. In 1997, Friedman et al. suggested a simple backoff approach for Bayesian network classifiers (BNCs). Backoff methods dominate in n-gram language models, with modified Kneser-Ney smoothing being the best known, and a Bayesian variant exists in the form of Pitman-Yor process language models from Teh in 2006. In this talk we will present some results on using backoff methods for Bayesian network classifiers and Bayesian networks generally. For BNCs at least, the improvements are dramatic and alleviate some of the issues of choosing too dense a network.
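The basic idea behind backoff estimation can be sketched in a few lines. This is not the hierarchical Dirichlet process estimator from the talk, just a minimal illustration I am supplying: an m-estimate whose prior mean is the distribution from a coarser (backed-off) context, so sparse parent configurations lean on the marginal rather than on a flat Laplace prior.

```python
from collections import Counter

def backoff_estimate(child_counts, backoff_probs, alpha=1.0):
    """m-estimate that smooths counts toward a coarser distribution.

    child_counts: Counter of value -> count in the given parent context.
    backoff_probs: dict of value -> probability from the backed-off context.
    alpha: concentration; larger values trust the backoff more.
    """
    n = sum(child_counts.values())
    return {v: (child_counts.get(v, 0) + alpha * p) / (n + alpha)
            for v, p in backoff_probs.items()}

# Sparse context: only 2 observations, so the estimate leans on the marginal.
marginal = {"a": 0.5, "b": 0.3, "c": 0.2}
est = backoff_estimate(Counter({"a": 2}), marginal, alpha=3.0)
```

With a Laplace estimate the unseen values "b" and "c" would get equal mass; backing off instead preserves the shape of the coarser distribution, which is what makes the method work for dense networks.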

Slides are at the AMBN site, here. Note I spent a bit of time embellishing my slides with some fabulous historical Japanese artwork!

Software for the system is built on the amazing Chordalysis system of François Petitjean, and the code is available as HierarchicalDirichletProcessEstimation. Boy, Nayyar and François really can do good empirical work!

## Visiting and talks at HSE, Moscow

August 20, 2017

**Introduction to Dirichlet Processes and their use,** at the workshop.

**Learning on networks of distributions for discrete data.** The HSE announcement is here.

## MDSS Seminar Series: Doing Bayesian Text Analysis

August 4, 2017

Giving a talk to the Monash Data Science Society on August 28th. Details here. It’s a historical perspective and motivational talk about doing text and document analysis. Slides are here.

## Lectures: Learning with Graphical Models

July 15, 2017

I’m giving a series of lectures this semester combining graphical models and some elements of nonparametric statistics. The intent is to build up to the theory of discrete matrix factorisation and its many variations. The lectures start on 27th July and are mostly given weekly. Weekly details are given in the calendar too. The slides are on the Monash share drive under “Wray’s Slides”, so if you are at Monash, do a search on Google Drive to find them. If you cannot find them, email me for access.

**OK lectures over as of 24/10/2017! Have some other things to prepare.**

**Variational Algorithms and Expectation-Maximisation**, Lecture 6, 19/10/17, Wray Buntine

*No lectures this week, 12th October*, as I will be catching up on reviews and completing a journal article. Next week we’ll work through some examples of variational algorithms, including LDA with a HDP, a model whose VA theory has been thoroughly botched up historically.

**Gibbs Sampling, Variational Algorithms and Expectation-Maximisation**, Lecture 5, 05/10/17, Wray Buntine

**ASIDE: Determinantal Point Processes**, one off lecture, 28/09/17, Wray Buntine

Based on the *Foundations and Trends* article “Determinantal Point Processes for Machine Learning”. Topics covered are interpretations and definitions, probability operations such as marginalising and conditioning, and sampling. The tutorial makes great use of knowledge of matrices and determinants.

*No lectures the following two weeks, 14th and 21st September*, as I will be on travel.

**Basic Distributions and Poisson Processes**, Lecture 4, 07/09/17, Wray Buntine

**Directed and Undirected Independence Models**, Lecture 3, 31/08/17, Wray Buntine

**Information and working with Independence**, Lecture 2, 17/08/17, Wray Buntine

*No lectures 03/08 (writing for ACML) and 10/08 (attending ICML)*.

**Motivating Probability and Decision Models**, Lecture 1, 27/07/17, Wray Buntine

## ICML 2017 paper: Leveraging Node Attributes for Incomplete Relational Data

May 19, 2017

Here is a paper with Ethan Zhao and Lan Du, both of Monash, which we’ll present in Sydney.

Relational data are usually highly incomplete in practice, which inspires us to leverage side information to improve the performance of community detection and link prediction. This paper presents a Bayesian probabilistic approach that incorporates various kinds of node attributes, encoded in binary form, in relational models with Poisson likelihood. Our method works flexibly with both directed and undirected relational networks. The inference can be done by efficient Gibbs sampling which leverages sparsity of both networks and node attributes. Extensive experiments show that our models achieve state-of-the-art link prediction results, especially with highly incomplete relational data.
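The sparsity the abstract mentions is worth unpacking. In Poisson relational models the rate for every dyad factorises through the node representations, so the total rate over all n² dyads collapses to a k×k computation and per-edge work is only needed on observed links. A minimal sketch of that trick, with made-up variable names and dimensions (this is not the paper's model or code):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3
theta = rng.gamma(1.0, 1.0, size=(n, k))     # node community memberships
B = np.diag(rng.gamma(1.0, 1.0, size=k))     # community interaction rates
lam = theta @ B @ theta.T                     # Poisson rate for each dyad

# Sparsity trick: the sum of all n^2 rates factorises, so the "zero" dyads
# cost O(n k + k^2) in total rather than O(n^2 k).
total_rate_fast = theta.sum(0) @ B @ theta.sum(0)
total_rate_slow = lam.sum()

# Poisson log-likelihood then needs per-edge terms only on observed links.
edges = [(0, 1), (2, 3)]                      # hypothetical observed links
loglik = sum(np.log(lam[i, j]) for i, j in edges) - total_rate_fast
```

This is why Poisson likelihoods scale to large sparse networks where a Bernoulli likelihood would force a pass over every non-edge.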

As usual, the reviews were entertaining, and we got some interesting results that didn’t make it into the paper. It’s always enlightening doing comparative experiments.